Similar Documents
20 similar documents found (search time: 15 ms)
1.
With the increase in XML-based applications, XML schema design has become an important task. One approach is to use conceptual schemas as a basis for generating XML documents that comply with the consensual information of specific domains. However, converting conceptual schemas to XML schemas is not a straightforward process, and poor design decisions can lead to inefficient query processing over the generated XML documents. This paper presents a conversion approach that takes into account the data and query workload estimated for an XML application in order to generate an XML schema from a conceptual schema. The workload information is used to produce XML schemas that respond well to the main queries of the application. We evaluate our approach through a case study carried out on a native XML database. The experimental results demonstrate that the XML schemas generated by our methodology yield better query performance than related approaches.
Ronaldo dos Santos Mello
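A minimal sketch of the central design decision described in the abstract above: deciding, from an estimated query workload, whether a related entity should be nested inside its parent element or kept separate and referenced by key. The decision rule, threshold, and workload representation are invented for illustration and are not the authors' algorithm.

```python
# Hypothetical sketch: choose nesting vs. key reference for a conceptual
# relationship based on an estimated query workload (not the paper's algorithm).

def choose_xml_design(relationship, workload):
    """Return 'nest' if most queries traverse parent and child together,
    otherwise 'reference' to avoid redundant nesting."""
    joint = workload.get(("join", relationship), 0)        # queries touching both entities
    child_only = workload.get(("child_only", relationship), 0)
    total = joint + child_only
    if total == 0:
        return "reference"                                  # no evidence: keep entities separate
    return "nest" if joint / total >= 0.7 else "reference"  # 0.7 is an arbitrary threshold

# Estimated workload for a Customer--Order schema (illustrative numbers only).
workload = {("join", ("customer", "order")): 80,
            ("child_only", ("customer", "order")): 10}
print(choose_xml_design(("customer", "order"), workload))   # -> 'nest'
```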

2.
Service-oriented computing (SOC) is the computing paradigm that uses services as its fundamental building block. Services are self-describing, open components intended to support the composition of distributed applications. Currently, Web services provide a standards-based realization of SOC owing to: (1) the machine-readable format (XML) of their functional and non-functional specifications, and (2) their messaging protocols built on top of the Internet. However, how to methodically identify, specify, design, deploy, and manage a sound and complete set of Web services when moving to a service-oriented architecture (SOA) remains an open issue. This paper describes a process for reverse engineering the architecture of relational database applications into an SOA, in which SQL statements are insulated from the applications, factored, implemented, and registered as Web services to be discovered, selected, and reused in composing e-business solutions. The process is based on two types of design pattern: a schema transformation pattern and a CRUD operations pattern. First, the schema transformation pattern allows the services to be identified. Then the CRUD operations pattern allows the abstract part of the identified services, namely their port types, to be specified. The process is implemented as a CASE tool that assists analysts in specifying services implementing common, reusable, basic business logic and data manipulation.
Youcef Baghdadi
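To make the idea of factoring SQL out of applications and exposing it behind CRUD-style service operations concrete, here is a hedged sketch that derives the four basic statement templates for a given table. The table, key, and function names are invented for the example; this is not the paper's CASE tool or its generated port types.

```python
# Hypothetical sketch: derive CRUD operation templates (the abstract "port type"
# of a data service) from a relational table definition.

def crud_operations(table, key, columns):
    cols = ", ".join(columns)
    params = ", ".join(f":{c}" for c in columns)
    sets = ", ".join(f"{c} = :{c}" for c in columns if c != key)
    return {
        "create": f"INSERT INTO {table} ({cols}) VALUES ({params})",
        "read":   f"SELECT {cols} FROM {table} WHERE {key} = :{key}",
        "update": f"UPDATE {table} SET {sets} WHERE {key} = :{key}",
        "delete": f"DELETE FROM {table} WHERE {key} = :{key}",
    }

for name, sql in crud_operations("customer", "id", ["id", "name", "email"]).items():
    print(name, "->", sql)
```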

3.
There is great potential for the development of many new applications using data on mobile objects and mobile regions. To promote these kinds of applications, advanced data management techniques for the representation and analysis of mobility-related data are needed. Together with application experts (behavioural ecologists), we investigate how two novel data management approaches may help. We focus on a case study concerning the analysis of fauna behaviour, in particular crested porcupines, which represents a typical example of mobile object monitoring. The first technique we experiment with is a recently developed conceptual spatio-temporal data modelling approach, MADS, which is used to model the schema of the database suited to our case study. Relying on this first outcome, a subset of the problem is represented in the logical language MuTACLP. This allows us to formalise and solve the queries that enable the behavioural ecologists to derive crested porcupine behaviour from the raw data on animal movements. Finally, we investigate the support offered by a commercial Geographic Information System (GIS) for the analysis of spatio-temporal data. We present a way to integrate MuTACLP and a GIS, combining the advantages of GIS technology with the expressive power of MuTACLP.
A. Raffaetà

4.
Ant colony optimization inspired resource discovery in P2P Grid systems
It is a challenge for traditional centralized or hierarchical Grid architectures to manage large-scale, dynamic resources while remaining scalable. The Peer-to-Peer (P2P) model offers the prospect of dynamicity, scalability, and availability of a large pool of resources. By integrating the P2P philosophy and techniques into a Grid architecture, the P2P Grid system is emerging as a promising platform for executing large-scale, resource-intensive applications. There are two typical resource discovery approaches for a large-scale P2P system. The first is an unstructured approach that propagates query messages to all nodes to locate the required resources. This method does not scale well, because each individual query generates a large amount of traffic and the network quickly becomes overwhelmed by the messages. The second is a structured approach that places resources at specified locations to make subsequent queries easier to satisfy. However, this method does not support multi-attribute range queries and may not work well in networks with an extremely transient population. This paper proposes and designs a large-scale P2P Grid system that employs an Ant Colony Optimization (ACO) algorithm to locate the required resources. The ACO method avoids large-scale flat flooding and supports multi-attribute range queries, and multiple ants can be employed to improve its parallelism. A simulator is developed to evaluate the proposed resource discovery mechanism. Comprehensive simulation results validate the effectiveness of the proposed method compared with the traditional unstructured and structured approaches.
Yuhui Deng
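A minimal sketch of ant-inspired discovery in the spirit of the abstract above, assuming a toy overlay: a query "ant" is forwarded hop by hop, preferring neighbours with more pheromone for the requested resource, and successful paths are reinforced. The graph, resource labels, and reinforcement rule are invented; the paper's multi-attribute range queries and parallel ants are omitted.

```python
# Hypothetical sketch of ant-inspired resource discovery: an "ant" (query message)
# walks the overlay, biased by pheromone values, and reinforces a successful path.
import random

def forward_ant(graph, pheromone, start, resource, resources, ttl=6):
    """Walk from `start` looking for a node that hosts `resource`."""
    path, node = [start], start
    for _ in range(ttl):
        if resource in resources.get(node, set()):
            # Reinforce the discovered path so later ants find it faster.
            for a, b in zip(path, path[1:]):
                pheromone[(a, b)] = pheromone.get((a, b), 1.0) + 1.0
            return path
        nbrs = graph.get(node, [])
        if not nbrs:
            break
        weights = [pheromone.get((node, n), 1.0) for n in nbrs]
        node = random.choices(nbrs, weights=weights)[0]  # probabilistic, pheromone-biased hop
        path.append(node)
    return None  # resource not found within the ant's time-to-live

graph = {"A": ["B", "C"], "B": ["D"], "C": ["D"], "D": []}
resources = {"D": {"cpu:4"}}
pheromone = {}
print(forward_ant(graph, pheromone, "A", "cpu:4", resources))
```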

5.
RRSi: indexing XML data for proximity twig queries
Twig query pattern matching is a core operation in XML query processing. Indexing XML documents for twig query processing is therefore of fundamental importance to effective information retrieval. In practice, many XML documents on the web are heterogeneous and have their own formats; documents describing relevant information can possess different structures. As a result, “user-interesting” documents whose structures are similar to, but do not exactly match, a user query are often missed. In this paper, we propose RRSi, a novel structural index designed for structure-based query lookup on heterogeneous sources of XML documents that supports proximate query answers. The index avoids unnecessary processing of structurally irrelevant candidates that might show good content relevance. An optimized version of the index, oRRSi, is also developed to further reduce both space requirements and computational complexity. To our knowledge, these structural indexes are the first to support proximity twig queries on XML documents. The results of our preliminary experiments show that query processing based on RRSi and oRRSi significantly outperforms previously proposed techniques in XML repositories with structural heterogeneity.
Vincent T. Y. Ng
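A minimal sketch of the general idea of structure-based lookup with proximate answers, assuming a toy index keyed by tag paths: a query path matches a document path if its tags appear along it in order, even if the match is not exact. The indexing scheme and relaxation rule are invented stand-ins, not RRSi or oRRSi.

```python
# Hypothetical sketch: index documents by root-to-leaf tag paths and accept
# relaxed (ordered, non-contiguous) matches so similar-but-non-exact structures
# are not missed.

def index_paths(docs):
    """docs: {doc_id: [('book', 'title'), ...]} -> path index."""
    idx = {}
    for doc_id, paths in docs.items():
        for p in paths:
            idx.setdefault(p, set()).add(doc_id)
    return idx

def proximate_match(query, path):
    """True if `query` is an ordered (not necessarily contiguous) sub-path of `path`."""
    it = iter(path)
    return all(tag in it for tag in query)

def lookup(idx, query):
    return {d for p, ids in idx.items() if proximate_match(query, p) for d in ids}

docs = {1: [("book", "title"), ("book", "author", "name")],
        2: [("article", "heading")]}
idx = index_paths(docs)
print(lookup(idx, ("book", "name")))  # -> {1}: similar but non-exact structure still matches
```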

6.
VIREX provides an interactive approach for querying and integrating relational databases to produce XML documents and their corresponding schemas. VIREX connects to each database specified by the user; analyzes the catalogue to derive an interactive diagram equivalent to the extended entity-relationship diagram; allows the user to display sample records from the tables in the database; allows the user to rename columns and relations by editing the interactive diagram directly; facilitates the conversion of the relational database into XML; and derives the XML schema. VIREX works even when the catalogue of the relational database is missing: it extracts the required catalogue information by analyzing the database content. Further, VIREX supports VRXQuery, a visual query language aimed at naive users that lets them specify queries and define views directly on the interactive diagram as a sequence of mouse clicks with minimal keyboard input. The user interactively decides on certain factors to be considered in producing the XML result, including: 1) selecting the relations/attributes to be converted into XML; 2) specifying a predicate to be satisfied by the information to be converted into XML; 3) deciding on the order of nesting between the relations to be converted into XML; and 4) the ordering of the result. VRXQuery supports selection, projection, nesting/join, union, difference, and order-by. As the result of a query, VIREX displays on the screen the XML schema that satisfies the specified characteristics and generates colored (easy-to-read) XML document(s). Further, VIREX allows the user to display and review the SQL and XQuery equivalents of each query expressed in VRXQuery.
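A minimal sketch of the relational-to-XML step such a tool performs once the nesting order has been chosen, assuming a toy customer/order schema held in Python lists: the join is realised as element nesting. The data, element names, and nesting choice are invented for illustration; this is not VIREX or VRXQuery.

```python
# Hypothetical sketch: nest order rows under their customer rows according to a
# user-chosen nesting, then emit the resulting XML document.
import xml.etree.ElementTree as ET

customers = [{"id": 1, "name": "Ada"}, {"id": 2, "name": "Lin"}]
orders = [{"id": 10, "customer_id": 1, "total": "99.50"},
          {"id": 11, "customer_id": 1, "total": "12.00"}]

root = ET.Element("customers")
for c in customers:
    c_el = ET.SubElement(root, "customer", id=str(c["id"]))
    ET.SubElement(c_el, "name").text = c["name"]
    orders_el = ET.SubElement(c_el, "orders")
    for o in orders:
        if o["customer_id"] == c["id"]:          # join realised as nesting
            o_el = ET.SubElement(orders_el, "order", id=str(o["id"]))
            ET.SubElement(o_el, "total").text = o["total"]

print(ET.tostring(root, encoding="unicode"))
```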

7.
8.
XFlavor: providing XML features in media representation
We present XFlavor, a framework for providing XML representation of multimedia data. XFlavor can be used to convert multimedia data back and forth between binary and XML representations. Compared to bitstreams, XML documents are easier to access and manipulate, and consequently, the development of multimedia processing software is greatly facilitated, as one generic XML parser can be used to read and write different types of data in XML form.
Alexandros Eleftheriadis
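A minimal sketch of the binary-to-XML round trip that such a framework enables, assuming a made-up fixed-layout 8-byte media header. The format, element names, and conversion functions are invented for illustration; XFlavor itself works from formal bitstream descriptions, which are not shown here.

```python
# Hypothetical sketch of a binary <-> XML round trip for a made-up 8-byte media
# header (2-byte version, 2-byte width, 2-byte height, 2-byte frame count).
import struct
import xml.etree.ElementTree as ET

FMT = ">HHHH"  # big-endian, four unsigned 16-bit fields

def to_xml(blob):
    version, width, height, frames = struct.unpack(FMT, blob)
    el = ET.Element("header", version=str(version))
    ET.SubElement(el, "width").text = str(width)
    ET.SubElement(el, "height").text = str(height)
    ET.SubElement(el, "frames").text = str(frames)
    return ET.tostring(el, encoding="unicode")

def to_binary(xml_text):
    el = ET.fromstring(xml_text)
    fields = (int(el.get("version")), int(el.find("width").text),
              int(el.find("height").text), int(el.find("frames").text))
    return struct.pack(FMT, *fields)

blob = struct.pack(FMT, 1, 640, 480, 250)
assert to_binary(to_xml(blob)) == blob   # XML form is editable, binary form is compact
print(to_xml(blob))
```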

9.
eb3 is a trace-based formal language created for the specification of information systems. In eb3, each entity and association attribute is independently defined by a recursive function on the valid traces of external events. This paper describes an algorithm that generates, for each external event, a transaction that updates the values of the affected attributes in their relational database representation. The benefits are twofold: eb3 attribute specifications are automatically translated into executable programs, eliminating the system design and implementation steps; and the construction of information systems is streamlined, because eb3 specifications are simpler and shorter to write than the corresponding traditional specifications, designs, and implementations. In particular, the paper shows that simple eb3 constructs can replace complex SQL queries that are typically difficult to write.
Régine Laleau
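A minimal sketch of the target of such a translation: each external event is mapped to the SQL transaction that updates the relational representation of the attributes it affects. The event types, table, and update rules below are a made-up library-style example, not the paper's generated code or its eb3 specifications.

```python
# Hypothetical sketch: map an external event to the SQL transaction that updates
# the relational representation of the attributes it affects.
import sqlite3

def transaction_for(event):
    """Return the (sql, parameters) statements for one external event (illustrative rules)."""
    if event["type"] == "Lend":        # a library-style example, not from the paper
        return [("UPDATE book SET borrower = ? WHERE book_id = ?",
                 (event["member"], event["book"]))]
    if event["type"] == "Return":
        return [("UPDATE book SET borrower = NULL WHERE book_id = ?", (event["book"],))]
    raise ValueError("unknown event")

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE book (book_id TEXT PRIMARY KEY, borrower TEXT)")
conn.execute("INSERT INTO book VALUES ('b1', NULL)")

for sql, params in transaction_for({"type": "Lend", "book": "b1", "member": "m7"}):
    conn.execute(sql, params)
conn.commit()
print(conn.execute("SELECT * FROM book").fetchall())   # -> [('b1', 'm7')]
```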

10.
The gap between storing data in relational databases and transferring data in the form of XML has been closed, for example, by SQL/XML queries that generate XML data from relational data sources. However, only a few relational database systems support the evaluation of SQL/XML queries, and even in those systems the evaluation of such queries is quite slow compared to the evaluation of SQL queries. In this paper, we present S2CX, an approach that efficiently evaluates SQL/XML queries on any relational database system, whether or not it supports SQL/XML. As the result of an SQL/XML query, S2CX supports different output formats ranging from plain XML to various compressed XML representations, including a succinct encoding of XML data, schema-aware compressed XML, and grammar-compressed XML. In many cases, S2CX produces compressed XML as the result of an SQL/XML query even faster than Oracle 11g and DB2 evaluate SQL/XML queries into non-compressed XML. Furthermore, our approach to query evaluation scales better: the larger the dataset, the greater our speed-up compared to SQL/XML query evaluation in Oracle 11g and DB2.
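To make the input concrete, here is a query written with the standard SQL/XML publishing functions (XMLELEMENT, XMLATTRIBUTES, XMLAGG) of the kind such an approach evaluates, together with a toy dictionary encoding that separates tag names from text content. The table names and the encoding are illustrative assumptions; they are not S2CX's actual output formats.

```python
# A query in standard SQL/XML (shown as a string), plus a toy dictionary encoding
# that stores each tag name once and lets messages carry only codes and values.
SQLXML_QUERY = """
SELECT XMLELEMENT(NAME "dept",
         XMLATTRIBUTES(d.id AS "id"),
         XMLAGG(XMLELEMENT(NAME "emp", e.name)))
FROM dept d JOIN emp e ON e.dept_id = d.id
GROUP BY d.id
"""

def dictionary_encode(events):
    """events: [('start', 'dept'), ('text', 'Ada'), ('end', 'dept'), ...]"""
    tags, encoded = {}, []
    for kind, value in events:
        if kind in ("start", "end"):
            code = tags.setdefault(value, len(tags))   # each tag name stored once
            encoded.append((kind[0], code))
        else:
            encoded.append(("t", value))
    return tags, encoded

events = [("start", "dept"), ("start", "emp"), ("text", "Ada"),
          ("end", "emp"), ("end", "dept")]
print(dictionary_encode(events))
```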

11.
Various techniques have been developed for different query types in content-based image retrieval systems such as sampling queries, constrained sampling queries, multiple constrained sampling queries, k-NN queries, constrained k-NN queries, and multiple localized k-NN queries. In this paper, we propose a generalized query model suitable for expressing queries of different types, and investigate efficient processing techniques for this new framework. We exploit sequential access and data sharing by developing new storage and query processing techniques to leverage inter-query concurrency. Our experimental results, based on the Corel dataset, indicate that the proposed optimization can significantly reduce average response time in a multiuser environment, and achieve better retrieval precision and recall compared to two recent techniques.
Ning Yu
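A minimal sketch of two of the query types listed in the abstract above, a k-NN query over image feature vectors and a constrained k-NN query that filters candidates by a predicate, assuming in-memory feature vectors. The vectors, metadata, and linear scan are illustrative; the paper's storage layout and inter-query optimizations are not shown.

```python
# Hypothetical sketch of k-NN and constrained k-NN queries over feature vectors.
import math

def knn(db, query, k, predicate=lambda meta: True):
    """db: {image_id: (feature_vector, metadata)} -> ids of the k closest images."""
    scored = []
    for img_id, (vec, meta) in db.items():
        if predicate(meta):                                   # constraint, if any
            dist = math.dist(query, vec)                      # Euclidean distance
            scored.append((dist, img_id))
    return [img_id for _, img_id in sorted(scored)[:k]]

db = {"a": ((0.1, 0.9), {"tag": "beach"}),
      "b": ((0.2, 0.8), {"tag": "beach"}),
      "c": ((0.9, 0.1), {"tag": "city"})}
print(knn(db, (0.15, 0.85), k=2))                                           # plain k-NN
print(knn(db, (0.15, 0.85), k=2, predicate=lambda m: m["tag"] == "city"))   # constrained k-NN
```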

12.
Recently, Peer-to-Peer (P2P) has become a popular paradigm for building distributed systems, aiming to provide resource localization and sharing in large-scale networks. However, advanced searching for resources remains an open issue. The flooding technique used by some P2P systems is expensive in bandwidth and scales poorly, while more efficient systems based on distributed hash tables (DHTs) lack query expressiveness and flexibility. This paper addresses this issue by discussing existing solutions and proposing a novel approach to support advanced multi-keyword queries in P2P systems. It extends existing, widely established DHT-based localization frameworks. The new approach provides an effective resource localization framework that can substantially reduce bandwidth consumption and improve load balancing across the network. Moreover, various kinds of applications can be deployed on top of this generic framework; as a relevant use case, the paper describes a novel service discovery and management application.
Nazim Agoulmine
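A minimal sketch of the DHT baseline that such work extends, assuming a simplified hash ring with no routing or replication: each keyword is hashed to a responsible node holding a posting set, and a multi-keyword query intersects the sets. The ring construction and data layout are invented; this is not the paper's protocol.

```python
# Hypothetical sketch: multi-keyword lookup over a simplified DHT ring.
import hashlib
from bisect import bisect

NODES = sorted(int(hashlib.sha1(f"node{i}".encode()).hexdigest(), 16) % 2**32
               for i in range(8))

def responsible_node(key):
    h = int(hashlib.sha1(key.encode()).hexdigest(), 16) % 2**32
    return NODES[bisect(NODES, h) % len(NODES)]      # first node clockwise from the key

postings = {}  # node_id -> {keyword: set(resource_ids)}

def publish(keyword, resource):
    node = responsible_node(keyword)
    postings.setdefault(node, {}).setdefault(keyword, set()).add(resource)

def query(keywords):
    sets = [postings.get(responsible_node(k), {}).get(k, set()) for k in keywords]
    return set.intersection(*sets) if sets else set()

publish("gpu", "res42")
publish("linux", "res42")
publish("linux", "res7")
print(query(["gpu", "linux"]))   # -> {'res42'}
```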

13.
In spite of significant improvements in video data retrieval, no system has yet been developed that can adequately respond to a user’s query. Typically, the user has to refine the query many times and view the results until the expected videos are eventually retrieved from the database. The complexity of video data and questionable query structuring by the user aggravate the retrieval process. Most previous research in this area has focused on retrieval based on low-level features. Managing imprecise queries using semantic (high-level) content is no easier than managing queries based on low-level features, owing to the absence of a proper continuous distance function. We provide a method to help users search for clips and videos of interest in video databases. Video clips are classified as interesting or uninteresting based on user browsing. The attribute values of the clips are analyzed for commonality, presence, and frequency within each of the two groups, and this information is used to compute the relevance of each clip to the user’s query. In this paper, we provide an intelligent query structuring system, called I-Quest, that ranks clips based on user browsing feedback in situations where generating a template from the interesting and uninteresting sets is impossible or yields poor results.
Ramazan Savaş Aygün (corresponding author)
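A minimal sketch of the ranking idea in the abstract above: a clip scores higher when its attribute values are frequent among clips the user browsed as interesting and rare among uninteresting ones. The additive scoring formula and the example attributes are invented; only the general idea follows the abstract, not I-Quest's actual measures.

```python
# Hypothetical sketch: score a clip by how much more frequent its attribute values
# are among "interesting" clips than among "uninteresting" ones.
from collections import Counter

def value_frequencies(clips):
    """Fraction of clips in which each (attribute, value) pair occurs."""
    counts = Counter(v for clip in clips for v in clip.items())
    return {k: c / len(clips) for k, c in counts.items()} if clips else {}

def relevance(clip, interesting, uninteresting):
    f_pos = value_frequencies(interesting)
    f_neg = value_frequencies(uninteresting)
    return sum(f_pos.get(item, 0.0) - f_neg.get(item, 0.0) for item in clip.items())

interesting = [{"action": "goal", "camera": "wide"}, {"action": "goal", "camera": "close"}]
uninteresting = [{"action": "crowd", "camera": "wide"}]
candidate = {"action": "goal", "camera": "wide"}
print(relevance(candidate, interesting, uninteresting))   # higher score = more relevant
```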

14.
In this paper, we aim to provide adaptive multimedia services, especially video services, to end-users in an efficient and secure manner. Users moving outside the office should be able to maintain an office-like environment at their current locations. First, the agents in our proposed architecture negotiate the various communication and interaction factors autonomously and dynamically. In addition to service and system agents, we developed a user agent that negotiates requirements and capabilities at run time to furnish the best possible service results. We also designed and integrated a video indexing and key-framing service within the overall agent-based architecture, and use this video indexing and content-based analysis service to adapt video content to run-time conditions. Finally, we designed a video XML schema to validate the media content produced by this multimedia service against specific requirements and features, as we describe later.
Ahmed Karmouch

15.
In the past decade, the number of mobile devices has increased significantly, and these devices offer ever greater computational capabilities. It is therefore possible to envision a near future in which client applications are deployed on such devices. There are, however, constraints that hinder this deployment, especially the limited communication bandwidth and storage space available. This paper describes the Efficient XML Data Exchange Manager (EXEM), which combines context-dependent lossy and lossless compression mechanisms to support the lightweight exchange of objects in XML format between server and client applications. The lossy compression mechanism reduces the size of XML messages by using known information about the application, while the lossless compression mechanism decouples data from metadata (the compression dictionary). We illustrate the use of EXEM with a prototype implementation of the lossless compression mechanism that optimizes the resources available on the server and the mobile client. The experimental results demonstrate the efficiency of the EXEM approach for XML data exchange in the context of mobile application development.
Serhan Dagtas
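A minimal sketch of the data/metadata decoupling idea, assuming the tag dictionary is agreed once between server and client so that each subsequent message carries only numeric codes plus text values. The dictionary construction, flattened message layout, and example XML are invented; this is not EXEM's wire format, and element nesting is deliberately omitted for brevity.

```python
# Hypothetical sketch: share the tag dictionary once, then exchange only
# (tag code, text value) pairs for each message.
import xml.etree.ElementTree as ET

def build_dictionary(sample_xml):
    """Shared once; maps every tag name seen in a representative message to a code."""
    tags = sorted({el.tag for el in ET.fromstring(sample_xml).iter()})
    return {tag: i for i, tag in enumerate(tags)}

def compress(xml_text, dictionary):
    root = ET.fromstring(xml_text)
    return [(dictionary[el.tag], el.text or "") for el in root.iter()]

def decompress(codes, dictionary):
    reverse = {v: k for k, v in dictionary.items()}
    return [(reverse[c], text) for c, text in codes]

sample = "<order><item>pen</item><qty>3</qty></order>"
d = build_dictionary(sample)
msg = compress("<order><item>ink</item><qty>7</qty></order>", d)
print(msg)                 # compact: codes plus values only
print(decompress(msg, d))  # lossless with respect to tag/value content
```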

16.
Text search engines are inadequate for indexing and searching XML documents because they ignore metadata and aggregation structure implicit in the XML documents. On the other hand, the query languages supported by specialized XML search engines are very complex. In this paper, we present a simple yet flexible query language, and develop its semantics to enable intuitively appealing extraction of relevant fragments of information while simultaneously falling back on retrieval through plain text search if necessary. Our approach combines and generalizes several available techniques to obtain precise and coherent results.
Trivikram Immaneni (URL: http://www.cs.wright.edu/~tkprasad)

17.
Multirelational classification: a multiple view approach
Multirelational classification aims to discover useful patterns across multiple inter-connected tables (relations) in a relational database. Many traditional learning techniques, however, assume a single table or a flat file as input (the so-called propositional algorithms). Existing multirelational classification approaches either “upgrade” mature propositional learning methods to deal with relational representations or extensively “flatten” multiple tables into a single flat file, which is then handled by propositional algorithms. This article reports a multiple-view strategy, in which neither “upgrading” nor “flattening” is required, for mining relational databases. Our approach learns from multiple views (feature sets) of a relational database and then integrates the information acquired by the individual view learners to construct a final model. Our empirical studies show that the method compares well with the classifiers induced by the majority of multirelational mining systems in terms of accuracy and running time. The paper explores the implications of this finding for multirelational research and applications. In addition, the method has practical significance: it is appropriate for directly mining many real-world databases.
Herna L. Viktor
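A minimal sketch of the multiple-view strategy described above: train one simple learner per view (a feature set drawn from one relation) and combine their predictions by majority vote. The nearest-centroid learner, the two toy views, and the voting rule are stand-in assumptions, not the learners or integration scheme used in the paper.

```python
# Hypothetical sketch: one learner per view, predictions combined by majority vote.
import math

def train_view(rows, labels):
    """Nearest-centroid learner for one view: centroid of the features per class."""
    by_class = {}
    for row, y in zip(rows, labels):
        by_class.setdefault(y, []).append(row)
    return {y: [sum(col) / len(col) for col in zip(*rs)] for y, rs in by_class.items()}

def predict_view(model, row):
    return min(model, key=lambda y: math.dist(model[y], row))

def predict(view_models, views_of_example):
    votes = [predict_view(m, x) for m, x in zip(view_models, views_of_example)]
    return max(set(votes), key=votes.count)           # majority vote across views

# Two views of the same examples (e.g. features drawn from two joined tables).
view1 = [[1.0, 0.2], [0.9, 0.1], [0.1, 0.9]]
view2 = [[5.0], [4.8], [1.0]]
labels = ["pos", "pos", "neg"]
models = [train_view(view1, labels), train_view(view2, labels)]
print(predict(models, [[0.95, 0.15], [4.9]]))         # -> 'pos'
```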

18.
Information imprecision and uncertainty exist in many real-world applications, and for this reason fuzzy data management has been extensively investigated in various database management systems. Currently, introducing native support for XML data in relational database management systems (RDBMSs) has attracted considerable interest, with a view to leveraging the powerful and reliable data management services they provide. Although there is a rich literature on XML-to-relational storage, none of the existing solutions satisfactorily addresses the problem of storing fuzzy XML data in RDBMSs. In this paper, we study the methodology of storing and querying fuzzy XML data in relational databases. In particular, we present an edge-based approach to shred fuzzy XML data into relational data. The unique feature of our approach is that no schema information is required for data storage. On this basis, we present a generic approach for translating path expression queries into SQL for processing XML queries.
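A minimal sketch of classic edge-based shredding and of translating a child-axis path expression into self-joins over the edge table, with a possibility column added only to hint at the fuzzy extension. The table layout, the possibility values, and the translation rule are simplified assumptions, not the paper's storage scheme or query translation.

```python
# Hypothetical sketch: every XML edge becomes a row (id, parent, tag, value, poss);
# a simple /a/b/c path expression becomes a chain of self-joins.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE edge (
    id INTEGER, parent INTEGER, tag TEXT, value TEXT, poss REAL)""")
rows = [  # <library><book poss=0.8><title>XML</title></book></library>
    (1, None, "library", None, 1.0),
    (2, 1, "book", None, 0.8),         # fuzzy membership degree of this element
    (3, 2, "title", "XML", 1.0),
]
conn.executemany("INSERT INTO edge VALUES (?,?,?,?,?)", rows)

def path_to_sql(path):
    """Translate /a/b/c (child axis only, trusted input) into self-joins."""
    tags = path.strip("/").split("/")
    joins = " ".join(
        f"JOIN edge e{i} ON e{i}.parent = e{i-1}.id AND e{i}.tag = '{tags[i]}'"
        for i in range(1, len(tags)))
    return (f"SELECT e{len(tags) - 1}.value FROM edge e0 {joins} "
            f"WHERE e0.parent IS NULL AND e0.tag = '{tags[0]}'")

sql = path_to_sql("/library/book/title")
print(conn.execute(sql).fetchall())   # -> [('XML',)]
```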

19.
Over the last 15 years, database management systems (DBMSs) have been enhanced by the addition of rule-based programming to obtain active DBMSs. One of the greatest challenges in this area is to formally account for all aspects of active behavior in a uniform formalism. In this paper, we formalize active relational databases within the framework of the situation calculus by uniformly accounting for them with theories embodying non-Markovian control. We call these theories active relational theories and use them to capture the dynamics of active databases. Transaction processing and rule execution are modelled as a theorem-proving task using active relational theories as background axioms. We show that the major components of an active DBMS, namely the rule sets and the execution models, may be given a clear semantics using active relational theories. More precisely, we represent the rule set as a program written in a suitable version of the situation-calculus-based language ConGolog; we then extend an existing situation-calculus-based framework for modelling advanced transaction models into one for modelling the execution models of active behaviors.
Iluju Kiringa

20.
With the rapid advancements in positioning technologies such as the Global Positioning System (GPS) and wireless communications, the tracking of continuously moving objects has become more convenient. However, this development poses new challenges to database technology, since maintaining up-to-date information regarding the location of moving objects incurs an enormous number of updates. Existing indexes can no longer keep up with the high update rate while providing speedy retrieval at the same time. This study aims to improve k nearest neighbor (kNN) query performance while reducing update costs. Our approach is based on the observation that queries usually occur around certain places or spatial landmarks of interest, called reference points. We propose the Reference-Point-based tree (RP-tree), a two-layer index structure that indexes moving objects according to reference points. Experimental results show that the RP-tree achieves significant improvement over the TPR-tree.
Aoying Zhou
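A minimal sketch of the reference-point idea described above, assuming a flat dictionary of groups rather than the actual two-layer tree: each moving object is assigned to its nearest reference point, so an update touches only one group, and a kNN query ranks candidates gathered group by group. The reference points, objects, and the exhaustive candidate scan are illustrative simplifications.

```python
# Hypothetical sketch: group moving objects by nearest reference point and answer
# kNN queries from those groups (no pruning; a real index would use distance bounds).
import math

ref_points = {"station": (0.0, 0.0), "mall": (10.0, 10.0)}

def nearest_ref(pos):
    return min(ref_points, key=lambda r: math.dist(ref_points[r], pos))

def insert(groups, obj_id, pos):
    groups.setdefault(nearest_ref(pos), {})[obj_id] = pos   # an update touches one group

def knn(groups, q, k):
    candidates = []
    # Visit groups in order of reference-point distance to the query location.
    for ref in sorted(ref_points, key=lambda r: math.dist(ref_points[r], q)):
        candidates += [(math.dist(pos, q), oid) for oid, pos in groups.get(ref, {}).items()]
    return [oid for _, oid in sorted(candidates)[:k]]

groups = {}
insert(groups, "car1", (1.0, 1.5))
insert(groups, "car2", (9.0, 9.5))
insert(groups, "car3", (0.5, 0.2))
print(knn(groups, (0.0, 0.5), k=2))   # -> ['car3', 'car1']
```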
