Similar Documents
18 similar documents found.
1.
Publishing Relational Data Using SQL/XML and XQuery Technologies   (total citations: 1; self-citations: 0; citations by others: 1)
With the development of Web technologies, the Extensible Markup Language (XML) has rapidly become the standard for data exchange. At present, the vast majority of business data is still stored in relational database systems, so traditional databases must extend their functionality to support XML. How to publish data on existing database platforms using the SQL/XML standard or XQuery has therefore become a research focus. Working from concrete query cases on an Oracle database, this paper applies both technologies to publish data and compares their respective characteristics.
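The abstract above compares Oracle's SQL/XML functions with XQuery for publishing relational data as XML. As a minimal, database-agnostic sketch of the underlying idea (serializing relational rows into an XML document), the following Python example uses only the standard library; the table, columns, and sample rows are hypothetical and are not taken from the paper.

```python
import sqlite3
import xml.etree.ElementTree as ET

# Hypothetical sample data standing in for a relational source table.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE emp (empno INTEGER, ename TEXT, deptno INTEGER)")
conn.executemany("INSERT INTO emp VALUES (?, ?, ?)",
                 [(7369, "SMITH", 20), (7499, "ALLEN", 30)])

# Publish the relation as an XML document, one <emp> element per row,
# roughly what SQL/XML's XMLELEMENT/XMLAGG or an XQuery FLWOR expression builds.
root = ET.Element("employees")
for empno, ename, deptno in conn.execute("SELECT empno, ename, deptno FROM emp"):
    row = ET.SubElement(root, "emp", attrib={"empno": str(empno)})
    ET.SubElement(row, "ename").text = ename
    ET.SubElement(row, "deptno").text = str(deptno)

print(ET.tostring(root, encoding="unicode"))
```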

2.
Storing XML in relational databases is an important problem in XML research. Building on a survey of existing mapping methods, this paper proposes an approach that parses several similar XML documents, generates a relational schema for each according to the mapping rules, derives an integrated relational schema from them, creates a relational database, and then extracts the XML document data and stores it in the database according to the mapping. The approach preserves the information in the XML documents with a fairly compact structure, and its main advantage is that it does not require the documents' schema information (DTD or XML Schema). A concrete experiment demonstrates the effectiveness of the method.
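As a rough sketch of the schema-free mapping idea described above (deriving one integrated relational schema from several structurally similar XML documents without consulting any DTD or XML Schema), the Python fragment below unions the leaf element names found in each document into a single column set; the documents and element names are invented for illustration.

```python
import sqlite3
import xml.etree.ElementTree as ET

# Two structurally similar documents with slightly different fields (hypothetical).
docs = [
    "<book><title>XML</title><author>Wang</author></book>",
    "<book><title>SQL</title><author>Li</author><year>2008</year></book>",
]

# Derive an integrated column set from the union of the leaf element names.
columns, rows = [], []
for doc in docs:
    record = {child.tag: child.text for child in ET.fromstring(doc)}
    for tag in record:
        if tag not in columns:
            columns.append(tag)
    rows.append(record)

# Create one relation covering all documents and load the extracted data into it.
conn = sqlite3.connect(":memory:")
conn.execute(f"CREATE TABLE book ({', '.join(c + ' TEXT' for c in columns)})")
for record in rows:
    conn.execute(f"INSERT INTO book VALUES ({', '.join('?' * len(columns))})",
                 [record.get(c) for c in columns])

print(columns, conn.execute("SELECT * FROM book").fetchall())
```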

3.
Joseph Fong, Herbert Shiu, Davy Cheung. Software, 2008, 38(11): 1183-1213
Integrating information from multiple data sources is becoming increasingly important for enterprises that partner with other companies for e‐commerce. However, companies have their internal business applications deployed on diverse platforms and no standard solution for integrating information from these sources exists. To support business intelligence query activities, it is useful to build a data warehouse on top of middleware that aggregates the data obtained from various heterogeneous database systems. Online analytical processing (OLAP) can then be used to provide fast access to materialized views from the data warehouse. Since extensible markup language (XML) documents are a common data representation standard on the Internet and relational tables are commonly used for production data, OLAP must handle both relational and XML data. SQL and XQuery can be used to process the materialized relational and XML data cubes created from the aggregated data. This paper shows how to handle the two kinds of data cubes from a relational–XML data warehouse using extraction, transformation and loading (ETL). Copyright © 2008 John Wiley & Sons, Ltd.
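A very small sketch of the data-cube idea described in the abstract: aggregate fact rows by two dimensions, materialize the result as a relational cube, and serialize the same aggregate as an XML cube. The table, dimensions, and measures are invented for illustration and do not come from the paper.

```python
import sqlite3
import xml.etree.ElementTree as ET

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sales (region TEXT, year INTEGER, amount REAL)")
conn.executemany("INSERT INTO sales VALUES (?, ?, ?)",
                 [("east", 2007, 10.0), ("east", 2007, 5.0), ("west", 2008, 7.5)])

# Relational cube: a materialized GROUP BY aggregation (the "transform" step of ETL),
# which SQL queries can then slice and dice.
conn.execute("""CREATE TABLE sales_cube AS
                SELECT region, year, SUM(amount) AS total
                FROM sales GROUP BY region, year""")

# XML cube: the same aggregate serialized as an XML document for XQuery access.
cube = ET.Element("cube")
for region, year, total in conn.execute("SELECT region, year, total FROM sales_cube"):
    cell = ET.SubElement(cube, "cell", region=region, year=str(year))
    cell.text = str(total)

print(ET.tostring(cube, encoding="unicode"))
```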

4.
聂玲, 刘波. 计算机应用 (Journal of Computer Applications), 2010, 30(11): 2941-2944
Based on the definitions of the components in XML Schema and the nesting relationships among them, a set of structural mapping rules and semantic mapping rules for converting an XML Schema into relational schemas is established. A conversion algorithm built on these rules extracts the relational schemas from the Schema, and the resulting relational schemas are proved to satisfy 4NF. The results show that the generated relational schemas not only contain all the structural and content information of the XML Schema but also preserve most of the semantic constraint information and reduce storage redundancy.
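As a heavily simplified sketch of the structural mapping direction described above (not the paper's full rule set, and making no claim about the 4NF property), the fragment below reads one hypothetical complexType from an XML Schema and emits a corresponding CREATE TABLE statement, mapping each child element declaration to a column.

```python
import xml.etree.ElementTree as ET

XS = "{http://www.w3.org/2001/XMLSchema}"

# A hypothetical XML Schema fragment containing one complexType component.
schema = """<xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema">
  <xs:complexType name="Student">
    <xs:sequence>
      <xs:element name="id" type="xs:integer"/>
      <xs:element name="name" type="xs:string"/>
    </xs:sequence>
  </xs:complexType>
</xs:schema>"""

root = ET.fromstring(schema)
type_map = {"xs:integer": "INTEGER", "xs:string": "VARCHAR(255)"}

# Simplified structural mapping rule: complexType -> table, element -> column.
for ctype in root.iter(XS + "complexType"):
    cols = [f"{e.get('name')} {type_map.get(e.get('type'), 'VARCHAR(255)')}"
            for e in ctype.iter(XS + "element")]
    print(f"CREATE TABLE {ctype.get('name')} ({', '.join(cols)});")
```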

5.
裴松, 武彤. 微型机与应用 (Microcomputer & Its Applications), 2013, 32(17): 56-59
To extract meaningful data from the semi-structured XML data produced on enterprise production lines, this paper analyzes the characteristics of semi-structured XML data and of structured data in relational databases, as well as methods for storing semi-structured XML data in a relational database. For the practical application, an extended Huffman prefix encoding scheme is adopted to give each node of the XML document tree a unique code, realizing the mapping between XML documents and the relational database; a longest-prefix-matching strategy is also provided to support data queries and improve query efficiency.
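The abstract describes giving each node of the XML document tree a unique prefix code (an extended Huffman-style encoding) so that ancestor/descendant relationships reduce to prefix matching. The sketch below uses a plain positional prefix labelling rather than the paper's Huffman variant, purely to illustrate how prefix codes support such queries; the document and the codes are hypothetical.

```python
import xml.etree.ElementTree as ET

doc = "<bookstore><book><title>XML</title><price>30</price></book></bookstore>"

# Assign each node a prefix code: a child's code extends its parent's code.
# (The paper uses an extended Huffman encoding; a fixed two-bit positional
# label per level is used here only for illustration.)
codes = {}

def label(node, code):
    codes[code] = node.tag
    for i, child in enumerate(node):
        label(child, code + format(i, "02b"))

label(ET.fromstring(doc), "1")

def is_ancestor(code_a, code_b):
    """Node A is an ancestor of node B iff A's code is a proper prefix of B's."""
    return code_b.startswith(code_a) and code_a != code_b

print(codes)                       # e.g. {'1': 'bookstore', '100': 'book', ...}
print(is_ancestor("1", "10000"))   # bookstore is an ancestor of title -> True
```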

6.
The correctness of the data managed by database systems is vital to any application that utilizes data for business, research, and decision-making purposes. To guard databases against erroneous data not reflecting real-world data or business rules, semantic integrity constraints can be specified during database design. Current commercial database management systems provide various means to implement mechanisms to enforce semantic integrity constraints at database run-time. In this paper, we give an overview of the semantic integrity support in the most recent SQL-standard SQL:1999, and we show to what extent the different concepts and language constructs proposed in this standard can be found in major commercial (object-)relational database management systems. In addition, we discuss general design guidelines that point out how the semantic integrity features provided by these systems should be utilized in order to implement an effective integrity enforcing subsystem for a database. Received: 14 August 2000 / Accepted: 9 March 2001 / Published online: 7 June 2001
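As a small, concrete illustration of declarative semantic integrity enforcement of the kind surveyed above, the snippet below uses SQLite (which implements a subset of the SQL constraint constructs discussed for SQL:1999) to declare a CHECK constraint and shows the DBMS rejecting a violating row at run time; the table and the business rule are hypothetical.

```python
import sqlite3

conn = sqlite3.connect(":memory:")

# A semantic integrity constraint declared at design time: salaries must be positive.
conn.execute("""CREATE TABLE employee (
                    empno  INTEGER PRIMARY KEY,
                    salary REAL NOT NULL CHECK (salary > 0))""")

conn.execute("INSERT INTO employee VALUES (1, 3500.0)")      # satisfies the constraint

try:
    conn.execute("INSERT INTO employee VALUES (2, -100.0)")  # violates the CHECK rule
except sqlite3.IntegrityError as exc:
    print("rejected by the DBMS:", exc)
```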

7.
8.
Streamgraphs were popularized in 2008 when The New York Times used them to visualize box office revenues for 7500 movies over 21 years. The aesthetics of a streamgraph is affected by three components: the ordering of the layers, the shape of the lowest curve of the drawing, known as the baseline, and the labels for the layers. As of today, the ordering and baseline computation algorithms proposed in the paper of Byron and Wattenberg are still considered the state of the art. However, their ordering algorithm exploits statistical properties of the movie revenue data that may not hold in other data. In addition, the baseline optimization is based on a definition of visual energy that in some cases results in a considerable amount of visual distortion. We offer an ordering algorithm that works well regardless of the properties of the input data, and propose a 1‐norm based definition of visual energy and the associated solution method that overcomes the limitation of the original baseline optimization procedure. Furthermore, we propose an efficient layer labeling algorithm that scales linearly with the data size in place of the brute‐force algorithm adopted by Byron and Wattenberg. We demonstrate the advantage of our algorithms over existing techniques on a number of real-world data sets.
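To make the baseline component concrete, here is a tiny NumPy sketch that stacks layers on top of a baseline. It uses the simple centred ("silhouette") baseline rather than the 1-norm visual-energy optimization proposed in the paper, and the layer data is invented.

```python
import numpy as np

# Hypothetical layer thicknesses f_i(t): 3 layers over 6 time steps.
layers = np.array([[1.0, 2.0, 3.0, 2.0, 1.0, 0.5],
                   [0.5, 1.0, 1.5, 2.5, 2.0, 1.0],
                   [2.0, 1.5, 1.0, 1.0, 1.5, 2.0]])

# Baseline: the simple centred ("silhouette") choice g0 = -0.5 * sum_i f_i.
# The paper instead chooses g0 by minimizing a 1-norm visual-energy measure.
g0 = -0.5 * layers.sum(axis=0)

# Stack the layers: layer i occupies the band [bounds[i], bounds[i + 1]].
bounds = g0 + np.vstack([np.zeros(layers.shape[1]), np.cumsum(layers, axis=0)])

print("baseline:      ", g0)
print("upper envelope:", bounds[-1])   # symmetric around zero by construction
```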

9.
This communication addresses analytical PID tuning rules for integrating processes. First, this paper provides an analytical tuning method for a two-degree-of-freedom (2-Dof) PID controller using an enhanced internal model control (IMC) principle. On the basis of the robustness analyses, the presented method can easily achieve the performance/robustness tradeoff by specifying a desired robustness degree. Second, an analytical tuning method for a one-degree-of-freedom (1-Dof) PID is also proposed in terms of performance/robustness and servo/regulator tradeoffs, which are not commonly considered in 1-Dof controller design. The servo/regulator tradeoff is formulated as a constrained optimization problem to provide output responses as similar as possible to those produced by the 2-Dof PID controller. The presented PID settings are applicable to a wide range of integrating processes. Simulation studies show the effectiveness and merits of the proposed method.
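The abstract is about tuning rules rather than code, but a short simulation makes the setting concrete. The sketch below runs a discrete 1-Dof PID controller on a pure integrating process dy/dt = K·u using Euler steps; the gains are hand-picked illustrative values, not the IMC-based settings derived in the paper.

```python
# Discrete PID control of an integrating process dy/dt = K * u (Euler integration).
K, dt, t_end = 1.0, 0.01, 10.0
kp, ki, kd = 2.0, 0.5, 0.2          # illustrative gains, not the paper's tuning rules
setpoint = 1.0

y, integral, prev_err = 0.0, 0.0, 0.0
for _ in range(int(t_end / dt)):
    err = setpoint - y
    integral += err * dt
    derivative = (err - prev_err) / dt
    u = kp * err + ki * integral + kd * derivative   # 1-Dof PID control law
    prev_err = err
    y += K * u * dt                  # integrating plant: the output accumulates K * u

print(f"output after {t_end} s: {y:.4f} (setpoint {setpoint})")
```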

10.
The success of the Semantic Web crucially depends on the easy creation, integration, and use of semantic data. For this purpose, we consider an integration scenario that defies core assumptions of current metadata construction methods. We describe a framework of metadata creation where Web pages are generated from a database and the database owner is cooperatively participating in the Semantic Web. This leads us to the deep annotation of the database—directly by annotation of the logical database schema or indirectly by annotation of the Web presentation generated from the database contents. From this annotation, one may execute data mapping and/or migration steps, and thus prepare the data for use in the Semantic Web. We consider deep annotation as particularly valid because: (i) dynamic Web pages generated from databases outnumber static Web pages, (ii) deep annotation may be a very intuitive way to create semantic data from a database, and (iii) data from databases should remain where it can be handled most efficiently—in its databases. Interested users can then query this data directly or choose to materialize the data as RDF files.
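To illustrate the final materialization step mentioned above (exposing database content as RDF), the following standard-library-only sketch turns hypothetical relational rows into N-Triples; the namespace, table, and property names are invented, and the paper's actual annotation and mapping machinery is far richer than this.

```python
import sqlite3

# A hypothetical database behind a dynamic web site whose content should reach the Semantic Web.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE person (pid INTEGER, name TEXT, affiliation TEXT)")
conn.executemany("INSERT INTO person VALUES (?, ?, ?)",
                 [(1, "Alice", "AIFB"), (2, "Bob", "AIFB")])

NS = "http://example.org/onto#"   # invented ontology namespace

# Materialize each row as RDF triples in N-Triples syntax, one subject per row.
triples = []
for pid, name, affiliation in conn.execute("SELECT pid, name, affiliation FROM person"):
    subject = f"<{NS}person/{pid}>"
    triples.append(f'{subject} <{NS}name> "{name}" .')
    triples.append(f'{subject} <{NS}affiliation> "{affiliation}" .')

print("\n".join(triples))
```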

11.
12.
13.
Computer aided process planning (CAPP) systems have had limited success in integrating business functions and product manufacturing due to the inaccessibility and incompatibility of information residing in proprietary software. While large companies have developed or purchased complex order management and engineering applications, smaller manufacturers continue to use semi-automated and manual methods for managing information throughout the lifecycle of each new product and component. There is a need for reconfigurable and reprogrammable systems that combine advances in computer aided design (CAD)/computer aided manufacturing (CAM) technology and intelligent machining with product data management for documentation and cost control. The goal of this research is to demonstrate an architecture in which customer service, CAPP and a costing methodology known as activity based costing (ABC) are incorporated into a single system, thereby allowing companies to monitor and study how expenditures are incurred and which resources are being used by each job. The material presented in this paper is the result of a two-year university and industry sponsored research project in which professors and students at the Costa Rica Institute of Technology developed a software application for FEMA Industrial S.A., a local machining and fabrication shop with sixty-five employees and both conventional and CNC capabilities. The final results represent not only a significant contribution to local industry and to the students’ education but also to the continuing growth of CAPP. Implementing better decision making tools and standardizing transactions in digital format would reduce the workload on critical personnel and archive valuable knowledge for analyzing company methods and expertise.

14.
15.
The allosteric pocket of the Dengue virus (DENV2) NS2B/NS3 protease, which is proximal to its catalytic triad, represents a promising drug target (Othman et al., 2008). We have explored this binding site through large-scale virtual screening and molecular dynamics simulations followed by calculations of binding free energy. We propose two mechanisms for enzyme inhibition. A ligand may either destabilize electronic density or create steric effects relating to the catalytic triad residues NS3-HIS51, NS3-ASP75, and NS3-SER135. A ligand may also disrupt movement of the C-terminal of NS2B required for inter-conversion between the “open” and “closed” conformations. We found that chalcone and adenosine derivatives showed the greatest potential as drug discovery hits, acting through both inhibitory mechanisms. Studying the molecular mechanisms of these compounds might be helpful in further investigations of the allosteric pocket and its potential for drug discovery.

16.
Fan and Dai [Comput. Phys. Commun. 153 (2003) 17] have found a series of traveling wave solutions for nonlinear equations by applying a direct approach with computerized symbolic computations. They have claimed that the proposed method, in comparison with most existing symbolic computation methods such as the tanh method and the Jacobi function method, not only gives new and more general solutions, but also provides a guideline to classify the various types of solutions according to some parameters. We show that the claims by Fan and Dai are wrong, since some of the solutions do not satisfy the differential equation that they have adopted for the algebraic method.

17.
Extensive growth in functional brain imaging, perfusion-weighted imaging, diffusion-weighted imaging, brain mapping and brain scanning techniques has greatly increased the importance of cerebral cortical segmentation, both in 2-D and 3-D, from volumetric brain magnetic resonance imaging data sets. Besides that, recent growth in deformable brain segmentation techniques in 2-D and 3-D has brought the engineering community, such as the areas of computer vision, image processing, pattern recognition and graphics, closer to the medical community, such as neuro-surgeons, psychiatrists, oncologists, neuro-radiologists and internists. In Part I of this research (see Suri et al [1]), an attempt was made to review the state of the art in 2-D and 3-D cerebral cortical segmentation techniques from brain magnetic resonance imaging based on two main classes: region-based and boundary/surface-based. More than 18 different techniques for segmenting the cerebral cortex from brain slices acquired in orthogonal directions were shown using region-based techniques. We also showed more than ten different techniques to segment the cerebral cortex from magnetic resonance brain volumes using boundary/surface-based techniques. This paper (Part II) focuses on presenting state-of-the-art systems based on the fusion of boundary/surface-based with region-based techniques, also called regional-geometric deformation models, which take the paradigm of partial differential equations in the level set framework. We also discuss the pros and cons of these various techniques, besides giving the mathematical foundations for each sub-class in the cortical taxonomy. Special emphasis is placed on discussing the advantages, validation, challenges and neuro-science/clinical applications of cortical segmentation. Received: 25 August 2000, Received in revised form: 28 March 2001, Accepted: 28 March 2001

18.