Similar Documents
1.
We present the design of ObjectGlobe, a distributed and open query processor for Internet data sources. Today, data is published on the Internet via Web servers which have, if at all, very localized query processing capabilities. The goal of the ObjectGlobe project is to establish an open marketplace in which data and query processing capabilities can be distributed and used by any kind of Internet application. Furthermore, ObjectGlobe integrates cycle providers (i.e., machines) which carry out query processing operators. The overall goal is to make it possible to execute a query with – in principle – unrelated query operators, cycle providers, and data sources. Such an infrastructure can serve as enabling technology for scalable e-commerce applications, e.g., B2B and B2C marketplaces, that need to integrate data and data processing operations of a large number of participants. One of the main challenges in the design of such an open system is to ensure privacy and security. We discuss the ObjectGlobe security requirements, show how basic components such as the optimizer and runtime system need to be extended, and present the results of performance experiments that assess the additional cost of secure distributed query processing. Another challenge is quality-of-service management, so that users can constrain the costs and running times of their queries. Received: 30 October 2000 / Accepted: 14 March 2001 / Published online: 7 June 2001
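The placement problem this abstract describes (choosing which cycle provider runs which operator, subject to security constraints) can be illustrated in a few lines. The sketch below is my construction with invented names (CycleProvider, needs_trust), not ObjectGlobe's actual interfaces: a greedy planner assigns each operator to the cheapest provider that satisfies its trust requirement.

```python
# Hypothetical sketch (not ObjectGlobe's real API): assigning query
# operators to cycle providers under a trust constraint, in the spirit
# of an open query processor weighing cost against security.
from dataclasses import dataclass

@dataclass
class CycleProvider:
    name: str
    cost_per_tuple: float   # advertised processing cost
    trusted: bool           # e.g., authenticated via certificates

@dataclass
class Operator:
    name: str
    needs_trust: bool       # operator touches sensitive data

def place(operators, providers):
    """Greedily assign each operator to the cheapest admissible provider."""
    plan = {}
    for op in operators:
        candidates = [p for p in providers if p.trusted or not op.needs_trust]
        if not candidates:
            raise RuntimeError(f"no admissible provider for {op.name}")
        plan[op.name] = min(candidates, key=lambda p: p.cost_per_tuple).name
    return plan

providers = [CycleProvider("host-a", 0.5, trusted=False),
             CycleProvider("host-b", 1.2, trusted=True)]
operators = [Operator("scan", needs_trust=False),
             Operator("join-salaries", needs_trust=True)]
print(place(operators, providers))  # {'scan': 'host-a', 'join-salaries': 'host-b'}
```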

2.
Workflow management systems are becoming a relevant support for a large class of business applications, and many workflow models as well as commercial products are currently available. While the wide availability of tools facilitates development and the fulfilment of customer requirements, workflow application development still requires methodological guidelines that drive developers in the complex task of rapidly producing effective applications. In fact, it is necessary to identify and model the business processes, to design the interfaces towards existing cooperating systems, and to manage implementation aspects in an integrated way. This paper presents the WIRES methodology for developing workflow applications under a uniform modelling paradigm – UML modelling tools with some extensions – that covers the entire life cycle of these applications, from conceptual analysis to implementation. High-level analysis is performed under different perspectives, including a business and an organisational perspective. Distribution, interoperability and cooperation with external information systems are considered in this early stage. A set of “workflowability” criteria is provided to identify which candidate processes are suited to implementation as workflows. Non-functional requirements receive particular emphasis, since they are among the most important criteria for deciding whether workflow technology can actually be useful for implementing the business process at hand. The design phase tackles aspects of concurrency and cooperation, distributed transactions and exception handling. Reuse of component workflows, available in a repository as workflow fragments, is a distinguishing feature of the method. Implementation aspects are presented in terms of rules that guide the selection of a commercial workflow management system suitable for supporting the designed processes, coupled with guidelines for mapping the designed workflows onto the model offered by the selected system.

3.
Concurrency control in hierarchical multidatabase systems
Over the past decade, significant research has been done towards developing transaction management algorithms for multidatabase systems. Most of this work assumes a monolithic architecture of the multidatabase system, with a single software module that follows a single transaction management algorithm to ensure the consistency of data stored in the local databases. This monolithic architecture is not appropriate in a multidatabase environment where the system spans multiple organizations distributed over geographically distant locations. In this paper, we propose an alternative multidatabase transaction management architecture, in which the system is hierarchical in nature. The hierarchical architecture has consequences for the design of transaction management algorithms. An implication of the architecture is that the transaction management algorithms followed by a multidatabase system must be composable – that is, it must be possible to incorporate individual multidatabase systems as elements in a larger multidatabase system. We present a hierarchical architecture for a multidatabase environment and develop techniques for concurrency control in such systems. Edited by R. Sacks-Davis. Received: June 27, 1994 / Accepted: September 26, 1995
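The composability requirement can be made concrete with a toy model (illustrative classes, not the paper's algorithm): a multidatabase node exposes the same prepare/commit interface as a local database, so an entire multidatabase system can be plugged into a larger one as a single element.

```python
# A minimal sketch of "composable" transaction managers: a multidatabase
# node looks like a local database to its parent, so whole systems nest.
# The abort path of a real two-phase commit is omitted for brevity.
class LocalDB:
    def __init__(self, name): self.name = name
    def prepare(self, txn): print(f"{self.name}: prepare {txn}"); return True
    def commit(self, txn):  print(f"{self.name}: commit {txn}")

class MultiDBNode:
    """Hierarchical node; children may be LocalDBs or other MultiDBNodes."""
    def __init__(self, name, children): self.name, self.children = name, children
    def prepare(self, txn):
        return all(c.prepare(txn) for c in self.children)
    def commit(self, txn):
        for c in self.children: c.commit(txn)

# Two organisations' multidatabases composed into a larger one:
org1 = MultiDBNode("org1", [LocalDB("db1"), LocalDB("db2")])
org2 = MultiDBNode("org2", [LocalDB("db3")])
root = MultiDBNode("root", [org1, org2])
if root.prepare("T1"):
    root.commit("T1")
```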

4.
The increasing power of modern computers is steadily opening up new application domains for advanced data processing, such as engineering and knowledge-based applications. To meet their requirements, concepts for advanced data management have been investigated during the last decade, especially in the field of object orientation. Over the last couple of years, the database group at the University of Kaiserslautern has been developing such an advanced database system, the KRISYS prototype. In this article, we report on the results and experiences obtained in the course of this project. The primary objective for the first version of KRISYS was to provide semantic features, such as an expressive data model, a set-oriented query language, and deductive as well as active capabilities. The first KRISYS prototype became completely operational in 1989. To evaluate its features and to stabilize its functionality, we started to develop several applications with the system. These experiences marked the starting point for an overall redesign of KRISYS. Major goals were to tune KRISYS and its query-processing facilities to a suitable client/server environment, as well as to provide elaborate mechanisms for consistency control comprising semantic integrity constraints, multi-user synchronization, and failure recovery. The essential aspects of the resulting client/server architecture are embodied by the client-side data management needed to effectively support advanced applications and to achieve the system performance required for interactive work. The project stages of KRISYS closely reflect the essential developments that have taken place in research on advanced database systems over recent years. Hence, the subsequent discussions bring up a number of aspects of advanced data processing that are of general importance and applicability to database systems. Received: June 18, 1996 / Accepted: November 11, 1997

5.
Active database management systems (DBMSs) are a fast-growing area of research, mainly due to the large number of applications which can benefit from this active dimension. These applications are far from being homogeneous, requiring different kinds of functionality. However, most of the active DBMSs described in the literature provide only a fixed, hard-wired execution model to support the active dimension. In object-oriented DBMSs, event-condition-action rules have been proposed for providing active behaviour. This paper presents EXACT, a rule manager for object-oriented DBMSs which provides a variety of options from which the designer can choose the one that best fits the semantics of the concept to be supported by rules. Due to the difficulty of foreseeing future requirements, special attention has been paid to making rule management easily extensible, so that the user can tailor it to suit specific applications. This has been borne out by an implementation in ADAM, an object-oriented DBMS. An example is shown of how the default mechanism can be easily extended to support new requirements. Edited by Y. Vassiliou. Received: May 26, 1994 / Revised: January 26, 1995, June 22, 1996 / Accepted: November 4, 1996
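A minimal sketch of the event-condition-action pattern underlying such a rule manager is shown below; the RuleManager API is invented for illustration and is not EXACT's actual interface.

```python
# Illustrative event-condition-action (ECA) rule manager: rules are
# registered per event; on a signal, every rule whose condition holds
# for the affected object fires its action.
class Rule:
    def __init__(self, event, condition, action):
        self.event, self.condition, self.action = event, condition, action

class RuleManager:
    def __init__(self): self.rules = {}
    def register(self, rule):
        self.rules.setdefault(rule.event, []).append(rule)
    def signal(self, event, obj):
        """On an event, fire every rule whose condition holds for obj."""
        for rule in self.rules.get(event, []):
            if rule.condition(obj):
                rule.action(obj)

mgr = RuleManager()
mgr.register(Rule("update-salary",
                  condition=lambda emp: emp["salary"] > 100_000,
                  action=lambda emp: print(f"audit {emp['name']}")))
mgr.signal("update-salary", {"name": "ada", "salary": 120_000})
```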

6.
We present inverse subdivision algorithms, with linear time and space complexity, to detect and reconstruct uniform Loop, Catmull–Clark, and Doo–Sabin subdivision structure in irregular triangular, quadrilateral, and polygonal meshes. We consider two main applications for these algorithms. The first is to enable interactive modeling systems that support uniform subdivision surfaces to use popular interchange file formats which do not preserve the subdivision structure, such as VRML, without loss of information. The second is to improve the compression efficiency of existing lossless connectivity compression schemes by optimally compressing meshes with Loop subdivision connectivity. Our Loop inverse subdivision algorithm is based on global connectivity properties of the covering mesh, a concept motivated by the covering surface from algebraic topology. Although the same approach can be used for other subdivision schemes, such as Catmull–Clark, we present a Catmull–Clark inverse subdivision algorithm based on a much simpler graph-coloring algorithm and a Doo–Sabin inverse subdivision algorithm based on properties of the dual mesh. Straightforward extensions of these approaches to other popular uniform subdivision schemes are also discussed. Published online: 3 July 2002
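The paper's detection algorithms rest on global connectivity properties that are beyond a short sketch; as a simpler illustration of the setting (my construction, not the paper's method), the code below performs one step of Loop connectivity refinement, the operation any correct inverse must undo, and checks the necessary four-to-one face count.

```python
# One Loop subdivision step on connectivity only: each edge gets a
# midpoint vertex, and each triangle splits into four. A mesh with
# uniform Loop structure must be reachable by such a step.
def loop_refine_connectivity(triangles):
    midpoint = {}
    def mid(a, b):
        key = (min(a, b), max(a, b))
        if key not in midpoint:
            midpoint[key] = len(midpoint)      # fresh midpoint-vertex id
        return ('m', midpoint[key])            # tagged to avoid clashes
    out = []
    for a, b, c in triangles:
        ab, bc, ca = mid(a, b), mid(b, c), mid(c, a)
        out += [(a, ab, ca), (ab, b, bc), (ca, bc, c), (ab, bc, ca)]
    return out

coarse = [(0, 1, 2), (1, 3, 2)]           # two triangles sharing an edge
fine = loop_refine_connectivity(coarse)
assert len(fine) == 4 * len(coarse)       # necessary condition for Loop structure
print(len(fine), "triangles after one step")
```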

7.
An important CORBA service defined by the OMG is the Object Transaction Service (OTS), which handles distributed transaction processing. Given the importance of transaction processing, OTS should properly be implemented as part of the ORB to improve the efficiency of transaction-processing applications. With this consideration in mind, the OMG introduced the interposition technique in OTS. Starting from methodological considerations, this paper discusses how to implement OTS using this technique.
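To make the interposition idea concrete, the sketch below renders the pattern in plain Python (hypothetical classes; the real OTS interfaces are specified in CORBA IDL): a subordinate coordinator registers with its superior as a single resource and relays prepare/commit to the resources of its own domain.

```python
# Interposition sketch: the superior coordinator sees one Resource,
# which is actually a subordinate coordinator fronting local resources.
class Resource:
    def __init__(self, name): self.name = name
    def prepare(self): print(f"{self.name}: prepared"); return "VoteCommit"
    def commit(self):  print(f"{self.name}: committed")

class SubordinateCoordinator(Resource):
    """Looks like one Resource to the superior; coordinates many locally."""
    def __init__(self, name):
        super().__init__(name)
        self.local = []
    def register_resource(self, r): self.local.append(r)
    def prepare(self):
        votes = [r.prepare() for r in self.local]
        return "VoteCommit" if all(v == "VoteCommit" for v in votes) else "VoteRollback"
    def commit(self):
        for r in self.local: r.commit()

root_resources = []
sub = SubordinateCoordinator("branch-office")
sub.register_resource(Resource("local-db"))
root_resources.append(sub)                 # the root sees a single resource
if all(r.prepare() == "VoteCommit" for r in root_resources):
    for r in root_resources: r.commit()
```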

8.
Integration – supporting multiple application classes with heterogeneous performance requirements – is an emerging trend in networks, file systems, and operating systems. We evaluate two architectural alternatives – partitioned and integrated – for designing next-generation file systems. Whereas a partitioned server employs a separate file system for each application class, an integrated file server multiplexes its resources among all application classes; we evaluate the performance of the two architectures with respect to sharing of disk bandwidth among the application classes. We show that although the problem of sharing disk bandwidth in integrated file systems is conceptually similar to that of sharing network link bandwidth in integrated services networks, the arguments that demonstrate the superiority of integrated services networks over separate networks are not applicable to file systems. Furthermore, we show that: an integrated server outperforms the partitioned server in a large operating region and has slightly worse performance in the remaining region; the capacity of an integrated server is larger than that of the partitioned server; and an integrated server outperforms the partitioned server by a factor of up to 6 in the presence of bursty workloads.
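The intuition behind the integrated server's advantage under bursty workloads can be seen in a toy model (my construction, not the paper's experimental setup): a work-conserving shared disk can use bandwidth that a static partition strands.

```python
# Toy model of disk-bandwidth sharing between two application classes.
def served(demand_a, demand_b, capacity, partitioned):
    if partitioned:
        cap_a = cap_b = capacity / 2            # static split of disk bandwidth
        return min(demand_a, cap_a) + min(demand_b, cap_b)
    return min(demand_a + demand_b, capacity)   # work-conserving sharing

# Bursty workload: class A idle, class B wants the whole disk.
print(served(0, 100, 100, partitioned=True))    # 50 requests served
print(served(0, 100, 100, partitioned=False))   # 100 requests served
```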

9.
Integrating and customizing heterogeneous e-commerce applications
A broad spectrum of electronic commerce applications is currently available on the Web, providing services in almost any area one can think of. As the number and variety of such applications grow, more business opportunities emerge for providing new services based on the integration and customization of existing applications. (Web shopping malls and support for comparative shopping are just a couple of examples.) Unfortunately, the diversity of applications in each specific domain and the disparity of interfaces, application flows, actor roles in the business transaction, and data formats render the integration and manipulation of applications a rather difficult task. In this paper we present the Application Manifold system, aimed at simplifying the intricate task of integrating and customizing e-commerce applications. The scope of the work in this paper is limited to web-enabled e-commerce applications; we do not support the integration/customization of proprietary/legacy applications. The wrapping of such applications as web services is complementary to our work. Based on the emerging Web data standard, XML, and the application modeling standard, UML, the system offers a novel declarative specification language for describing the integration/customization task, supporting a modular approach in which new applications can be added and integrated at will with minimal effort. Then, acting as an application generator, the system generates a fully integrated/customized e-commerce application, with the declarativity of the specification allowing for optimization and verification of the generated application. The integration here deals with the full profile of the given e-commerce applications: the various services offered by the applications, the activities and roles of the different actors participating in the application (e.g., customers, vendors), the application flow, as well as the data involved in the process. This is in contrast to previous work on Web data integration, which focused primarily on querying the data available in the applications, mostly ignoring the additional aspects mentioned above. Received: 30 October 2000 / Accepted: 14 March 2001 / Published online: 2 August 2001
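As a deliberately tiny illustration of "declarative specification plus generation" (my invention; the real system uses XML- and UML-based specifications), the sketch below describes an integration as data and generates an executable pipeline from it.

```python
# Hypothetical miniature of a declarative integration spec and its
# generator; the spec names services, providers, and the application flow.
spec = {
    "actors": ["customer", "vendor"],
    "services": {
        "search":   {"provider": "shop-a", "input": "query",  "output": "offers"},
        "checkout": {"provider": "shop-b", "input": "offers", "output": "receipt"},
    },
    "flow": ["search", "checkout"],        # application flow, in order
}

def generate(spec):
    """Turn the declarative flow into an executable pipeline of stubs."""
    def stage(name):
        svc = spec["services"][name]
        return lambda data: f"{svc['provider']}.{name}({data}) -> {svc['output']}"
    def run(data):
        for name in spec["flow"]:
            data = stage(name)(data)
        return data
    return run

app = generate(spec)
print(app("mp3 player"))   # traces the integrated application flow
```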

10.
Effective timestamping in databases
Many existing database applications place various timestamps on their data, rendering temporal values such as dates and times prevalent in database tables. During the past two decades, several dozen temporal data models have appeared, all with timestamps as integral components. These models have used timestamps to encode two specific temporal aspects of database facts, namely transaction time, when the facts are current in the database, and valid time, when the facts are true in the modeled reality. However, with few exceptions, the assignment of timestamp values has been considered only in the context of individual modification statements. This paper takes the next logical step: it considers the use of timestamping for capturing transaction and valid time in the context of transactions. The paper initially identifies and analyzes several problems with straightforward timestamping, then proceeds to propose a variety of techniques aimed at solving these problems. Timestamping the results of a transaction with the commit time of the transaction is a promising approach. The paper studies how this timestamping may be done using a spectrum of techniques. While many database facts are valid until now, i.e., the current time, this value is absent from the existing temporal types; techniques that address this problem using different substitute values are presented. Using a stratum architecture, the performance of the different proposed techniques is studied. Although querying and modifying time-varying data is accompanied by a number of subtle problems, we present a comprehensive approach that provides application programmers with simple, consistent, and efficient support for modifying bitemporal databases in the context of user transactions. Received: March 11, 1998 / Accepted: July 27, 1999
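A minimal sketch of commit-time timestamping as advocated above (names are illustrative): updates are buffered during the transaction and all receive the commit timestamp at once, while the previous version's transaction-time interval is closed at the same instant.

```python
# Commit-time timestamping sketch: the timestamp is unknown while the
# transaction runs, so writes are buffered and stamped only at commit.
import time

class Transaction:
    def __init__(self, store):
        self.store, self.writes = store, []
    def update(self, key, value):
        self.writes.append((key, value))       # commit timestamp not yet known
    def commit(self):
        commit_ts = time.time()                # one timestamp for all writes
        for key, value in self.writes:
            versions = self.store.setdefault(key, [])
            if versions:
                versions[-1]["tt_end"] = commit_ts   # close the old version
            # new version is current in the database from commit time onwards
            versions.append({"value": value, "tt_start": commit_ts, "tt_end": None})

store = {}
t = Transaction(store)
t.update("acct-1", 500)
t.update("acct-2", 700)
t.commit()    # both new versions carry the same transaction-time start
print(store)
```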

11.
A new algorithm for clipping a line segment against a pyramid in E^3 is presented. The algorithm avoids computing intersection points that are not end points of the output line segment, and it handles all cases more efficiently. Its performance is shown to be consistently better than that of existing algorithms, including the Cohen–Sutherland, Liang–Barsky, and Cyrus–Beck algorithms.
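For context, the sketch below is a compact Cyrus–Beck-style parametric clip against a convex volume given as inward-facing planes; it illustrates one of the baseline algorithms named above, not the paper's new algorithm.

```python
# Parametric (Cyrus-Beck-style) clipping of a segment p0->p1 against a
# convex volume described by planes (n, a), with n pointing inside.
def clip_segment(p0, p1, planes):
    """Returns the clipped (q0, q1), or None if the segment is outside."""
    d = [p1[i] - p0[i] for i in range(3)]
    t_in, t_out = 0.0, 1.0
    for n, a in planes:
        denom = sum(n[i] * d[i] for i in range(3))          # n . d
        num = sum(n[i] * (a[i] - p0[i]) for i in range(3))  # n . (a - p0)
        if denom == 0:
            if num > 0:
                return None          # parallel to the plane and fully outside
        else:
            t = num / denom
            if denom > 0:
                t_in = max(t_in, t)      # segment entering the half-space
            else:
                t_out = min(t_out, t)    # segment leaving the half-space
        if t_in > t_out:
            return None
    point = lambda t: tuple(p0[i] + t * d[i] for i in range(3))
    return point(t_in), point(t_out)

# Clip against the half-space z >= 0 only, as a smoke test:
print(clip_segment((0, 0, -1), (0, 0, 1), [((0, 0, 1), (0, 0, 0))]))
```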

12.
Client-server object-oriented database management systems differ significantly from traditional centralized systems in terms of their architecture and the applications they target. In this paper, we present the client-server architecture of the EOS storage manager and describe the concurrency control and recovery mechanisms it employs. EOS offers a semi-optimistic locking scheme based on the multi-granularity two-version two-phase locking protocol. Under this scheme, multiple concurrent readers are allowed to access a data item while it is being updated by a single writer. Recovery is based on write-ahead redo-only logging. Log records are generated at the clients and shipped to the server during normal execution and at transaction commit. Transaction rollback is fast because there are no updates that have to be undone, and recovery from system crashes requires only one scan of the log for installing the changes made by transactions that committed before the crash. We also present a preliminary performance evaluation of the implementation of the above mechanisms. Edited by R. King. Received: July 1993 / Accepted: May 1996
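A minimal sketch of the two-version idea (my illustration, not EOS source code): readers keep seeing the committed version of an item while a single writer prepares a new, uncommitted one; commit installs it atomically.

```python
# Two-version item: one committed version visible to readers, one
# pending version owned by at most one writer at a time.
import threading

class TwoVersionItem:
    def __init__(self, value):
        self.committed = value
        self.pending = None
        self.write_lock = threading.Lock()   # at most one writer
    def read(self):
        return self.committed                # readers never block on the writer
    def begin_write(self, new_value):
        self.write_lock.acquire()
        self.pending = new_value
    def commit_write(self):
        self.committed, self.pending = self.pending, None
        self.write_lock.release()

item = TwoVersionItem("v1")
item.begin_write("v2")
print(item.read())     # "v1": a concurrent reader still sees committed state
item.commit_write()
print(item.read())     # "v2"
```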

13.
Susan D. Urban, Ling Fu, Jami J. Shah. Software, 1999, 29(14): 1313–1338
Many computer applications today require some form of distributed computing to allow different software components to communicate. Several commercial products now exist based on the Common Object Request Broker Architecture (CORBA) of the Object Management Group. The use of such tools, however, often requires the modification of existing systems rather than the development of new applications. The objective of this research has been to integrate a CORBA tool into an existing engineering design application for the purpose of (1) evaluating the amount of re-engineering involved in effectively integrating distributed object computing into an existing application, and (2) evaluating the use and performance of distributed object computing in an engineering domain, which often requires the transfer of large amounts of information. The results of this work demonstrate that CORBA technology can be easily integrated into existing applications. The ease of the integration, as well as the efficiency of the resulting system, however, depends upon the degree of modification that developers are willing to consider in the re-engineering process. The most transparent approach to the use of CORBA requires less modification and generally produces less efficient performance; the less transparent approach can require significant system modification but produce greater performance gains. This work outlines issues that must be considered for the partitioning of functionality between the client and the server, development of an IDL interface, development of client- and server-side wrappers, and support for concurrent, multi-user access. In addition, this work provides performance and implementation comparisons of different techniques for the use of wrappers and for the transfer of large data files between the client and the server. Performance comparisons for the incorporation of concurrent access are also presented. Copyright © 1999 John Wiley & Sons, Ltd.

14.
Building knowledge base management systems
Advanced applications in fields such as CAD, software engineering, real-time process control, corporate repositories and digital libraries require the construction, efficient access and management of large, shared knowledge bases. Such knowledge bases cannot be built using existing tools such as expert system shells, because these do not scale up, nor can they be built in terms of existing database technology, because such technology does not support the rich representational structure and inference mechanisms required for knowledge-based systems. This paper proposes a generic architecture for a knowledge base management system intended for such applications. The architecture assumes an object-oriented knowledge representation language with an assertional sublanguage used to express constraints and rules. It also provides for general-purpose deductive inference and special-purpose temporal reasoning. Results reported in the paper address several knowledge base management issues. For storage management, a new method is proposed for generating a logical schema for a given knowledge base. Query processing algorithms are offered for semantic and physical query optimization, along with an enhanced cost model for query cost estimation. On concurrency control, the paper describes a novel concurrency control policy which takes advantage of knowledge base structure and is shown to outperform two-phase locking for highly structured knowledge bases and update-intensive transactions. Finally, algorithms for compilation and efficient processing of constraints and rules during knowledge base operations are described. The paper describes original results, including novel data structures and algorithms, as well as preliminary performance evaluation data. Based on these results, we conclude that knowledge base management systems which can accommodate large knowledge bases are feasible. Edited by Gunter Schlageter and H.-J. Schek. Received: May 19, 1994 / Revised: May 26, 1995 / Accepted: September 18, 1995
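One way to picture a concurrency control policy that "takes advantage of knowledge base structure" (my illustration, not the paper's actual policy) is structure-aware locking: a transaction locks the root of a subtree of concepts and implicitly covers all descendants, in the spirit of multi-granularity locking.

```python
# Structure-aware locking sketch: a lock on a concept implicitly covers
# everything below it in the knowledge base's generalization hierarchy.
class Concept:
    def __init__(self, name, parent=None):
        self.name, self.parent, self.locked_by = name, parent, None
    def covered_by_lock(self, txn):
        node = self
        while node:                      # walk up towards the root
            if node.locked_by == txn:
                return True
            node = node.parent
        return False

thing = Concept("Thing")
vehicle = Concept("Vehicle", parent=thing)
car = Concept("Car", parent=vehicle)
vehicle.locked_by = "T1"                 # one lock on the subtree root
print(car.covered_by_lock("T1"))         # True: Car inherits the lock
print(thing.covered_by_lock("T1"))       # False
```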

15.
Easy concurrency     
Advances in technology raise expectations. As far as software engineering is concerned, the common expectation is that coding and deploying applications is going to be simple. It seems, though, that software engineering is not getting easier, and the complexity moves into the application domain. One source of complexity is application concurrency. It is not an uncommon development practice that concurrency and transaction management in multi-user, multi-threaded, event-driven applications are postponed until after most of the required functionality is implemented. This situation has various explanations. On the one hand, business logic may require access to and modification of large sets of inter-connected application objects. On the other, testing and stress-testing of this logic become possible only at advanced stages of product development. At these stages, increasing lock granularities may appear to be less "expensive" than debugging race conditions and deadlocks. Coarse-grained locking has, of course, an adverse effect on application scalability. Declaring rules of concurrency outside of the application may solve part of the problem. This paper presents an approach allowing developers to define concurrency in application-specific terms, design it in the early stages of development, and implement it using a documented API of the concurrency engine (CE). A simple notation makes it possible to record concurrency specifications in terms of application operations, relationships between application resources, and synchronization conflicts between operations. These concepts are demonstrated on examples; the final sections include the CE UML diagram, notes on API usage, and performance benchmarks. Published online: 25 July 2001
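The sketch below shows how a declared conflict specification might drive synchronization; the ConcurrencyEngine API is invented for illustration and is not the CE's documented API.

```python
# Illustrative "concurrency engine": the designer declares which named
# operations conflict; the engine serialises only conflicting ones.
import threading
from collections import defaultdict

class ConcurrencyEngine:
    def __init__(self, conflicts):
        # conflicts: set of frozensets of operation names that must not overlap
        self.conflicts = conflicts
        self.locks = defaultdict(threading.Lock)
    def run(self, op_name, fn):
        # One lock per conflict group this operation belongs to, acquired
        # in a fixed order to avoid deadlock.
        groups = sorted(str(sorted(g)) for g in self.conflicts if op_name in g)
        held = [self.locks[g] for g in groups]
        for lock in held: lock.acquire()
        try:
            return fn()
        finally:
            for lock in reversed(held): lock.release()

ce = ConcurrencyEngine(conflicts={frozenset({"reprice", "checkout"})})
ce.run("reprice", lambda: print("repricing catalogue"))
ce.run("browse", lambda: print("browsing runs unsynchronised"))
```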

16.
The problem of an instructive and realistic animation and visualization of the shadow and color conditions during conjunctions of actively and passively illuminated cosmic objects has so far found only partially satisfying solutions. As an example, we study a total solar eclipse. Specialized astronomical software has didactic shortcomings, even though solutions that are very impressive for experts have been given. Using the possibilities of commercial 3D-animation software, we give an object-oriented partial solution. In order to obtain correct astronomical representations, we model the object space, for different tasks, under cinematic aspects, with parameters for spatial and temporal scaling and for illumination and coloring, under couplings of varying strength. The parameters must be adapted a posteriori for optimal acceptance by the spectator.

17.
Most environments are passive – deaf, dumb and blind, unaware of their inhabitants and unable to assist them in a meaningful way. However, with the advent of ubiquitous computing – ever smaller, cheaper and faster computational devices embedded in a growing variety of “smart” objects – it is becoming increasingly possible to create active environments: physical spaces that can sense and respond appropriately to the people and activities taking place within them. Most of the early ubiquitous computing applications focus on how individuals interact with their environments as they work on foreground tasks. In contrast, this paper focuses on how groups of people affect and are affected by background aspects of their environments.

18.
Location is one of the most important elements of context in ubiquitous computing. In this paper we describe a location model, a spatial-aware communication model, and an implementation of the models that exploits location for processing and communicating context. The location model describes a location tree, which contains human-readable semantic and geometric information about an organisation, and a structure to describe the current location of an object or a context. The proposed system is designed to work not only on more powerful devices like handhelds, but also on small computer systems that are embedded into everyday artefacts (making them digital artefacts). Model and design decisions were made on the basis of experiences from three prototype setups with several applications, which we built from 1998 to 2002. While running these prototypes we collected experiences from designers, implementers and users and formulated them as guidelines in this paper. All the prototype applications make heavy use of location information to provide their functionality. We found that location is not only of use as information for the application but is also important for communicating context. In this paper we introduce the concept of spatial-aware communication, where data is communicated based on the relative location of digital artefacts rather than on their identity. Correspondence to: Michael Beigl, Telecooperation Office (TecO), University of Karlsruhe, Vincenz-Prießnitz-Str. 1, D-76131 Karlsruhe, Germany. Email: michael@teco.edu
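A small sketch of a location tree with spatial-aware delivery, following the abstract's description but with invented names: artefacts attach to tree nodes, and a message is addressed to a place rather than to identities.

```python
# Location tree with human-readable node names; delivery is by place.
class LocationNode:
    def __init__(self, name, parent=None):
        self.name, self.parent = name, parent
        self.children, self.artefacts = [], []
        if parent: parent.children.append(self)
    def all_artefacts(self):
        yield from self.artefacts
        for child in self.children:
            yield from child.all_artefacts()

def send_to_location(node, message):
    """Spatial-aware communication: address by location, not identity."""
    for artefact in node.all_artefacts():
        print(f"{artefact} <- {message}")

campus = LocationNode("University of Karlsruhe")
building = LocationNode("TecO", parent=campus)
room = LocationNode("Room 101", parent=building)
room.artefacts += ["smart-doorplate", "handheld-17"]
send_to_location(room, "meeting starts in 5 minutes")
```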

19.
Dealing with forward and backward jumps in workflow management systems
Workflow management systems (WfMS) offer a promising technology for the realization of process-centered application systems. A deficiency of existing WfMS is their inadequate support for dealing with exceptional deviations from the standard procedure. In the ADEPT project, therefore, we have developed advanced concepts for workflow modeling and execution which aim to increase flexibility in WfMS. On the one hand, we allow workflow designers to model exceptional execution paths already at buildtime, provided that these deviations are known in advance. On the other hand, authorized users may dynamically deviate from the pre-modeled workflow at runtime in order to deal with unforeseen events. In this paper, we focus on the forward and backward jumps needed in this context. We describe sophisticated modeling concepts for capturing deviations in workflow models at buildtime, and we show how forward and backward jumps (of different semantics) can be correctly applied in an ad-hoc manner at runtime. We work out basic requirements, facilities, and limitations arising in this context. Our experiences with applications from different domains have shown that the developed concepts form a key part of process flexibility in process-centered information systems. Received: 6 October 2002 / Accepted: 8 January 2003 / Published online: 27 February 2003. This paper is a revised and extended version of [40]. The described work was partially performed in the research project “Scalability in Adaptive Workflow Management Systems” funded by the Deutsche Forschungsgemeinschaft (DFG).
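A minimal sketch of forward and backward jumps over a purely sequential workflow (a drastic simplification of ADEPT, which supports much richer graph structures): a forward jump marks skipped activities so they can be caught up later, and a backward jump resets completed activities for re-execution.

```python
# Sequential workflow with ad-hoc forward and backward jumps.
class SequentialWorkflow:
    def __init__(self, activities):
        self.activities = activities
        self.state = {a: "waiting" for a in activities}
        self.pos = 0
    def complete_current(self):
        self.state[self.activities[self.pos]] = "done"
        self.pos += 1
    def jump_forward(self, target):
        i = self.activities.index(target)
        for a in self.activities[self.pos:i]:
            self.state[a] = "skipped"        # may require catching up later
        self.pos = i
    def jump_backward(self, target):
        i = self.activities.index(target)
        for a in self.activities[i:self.pos]:
            self.state[a] = "waiting"        # will be executed again
        self.pos = i

wf = SequentialWorkflow(["admit", "examine", "operate", "discharge"])
wf.complete_current()                 # "admit" done
wf.jump_forward("operate")            # emergency: skip "examine"
print(wf.state)
wf.jump_backward("examine")           # catch up the skipped step
print(wf.state)
```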
