Similar Documents
20 similar documents found (search time: 31 ms)
1.
Workflow management systems are becoming a relevant support for a large class of business applications, and many workflow models as well as commercial products are currently available. While the wide availability of tools facilitates development and the fulfilment of customer requirements, workflow application development still requires methodological guidelines that guide developers through the complex task of rapidly producing effective applications. In fact, it is necessary to identify and model the business processes, to design the interfaces towards existing cooperating systems, and to manage implementation aspects in an integrated way. This paper presents the WIRES methodology for developing workflow applications under a uniform modelling paradigm – UML modelling tools with some extensions – that covers the whole life cycle of these applications, from conceptual analysis to implementation. High-level analysis is performed from different perspectives, including a business and an organisational perspective. Distribution, interoperability and cooperation with external information systems are considered at this early stage. A set of “workflowability” criteria is provided in order to identify which candidate processes are suited to implementation as workflows. Non-functional requirements receive particular emphasis, in that they are among the most important criteria for deciding whether workflow technology can actually be useful for implementing the business process at hand. The design phase tackles aspects of concurrency and cooperation, distributed transactions and exception handling. Reuse of component workflows, available in a repository as workflow fragments, is a distinguishing feature of the method.
Implementation aspects are presented in terms of rules that guide the selection of a commercial workflow management system suitable for supporting the designed processes, coupled with guidelines for mapping the designed workflows onto the model offered by the selected system.

2.
In this paper, we present an approach to global transaction management in workflow environments. The transaction mechanism is based on the well-known notion of compensation, extended to deal with arbitrary process structures (allowing cycles in processes) and with safepoints (allowing partial compensation of processes). We present a formal specification of the transaction model and transaction management algorithms in set and graph theory, providing clear, unambiguous transaction semantics. The specification is straightforwardly mapped to a modular architecture, the implementation of which was first applied in a testing environment and then in the prototype of a commercial workflow management system. The modular nature of the resulting system allows easy distribution using middleware technology. The path from abstract semantics specification to concrete, real-world implementation of a workflow transaction mechanism is thus covered in a complete and coherent fashion. As such, this paper provides a complete framework for the application of well-founded transactional workflows. Received: 16 November 1999 / Accepted: 29 August 2001 Published online: 6 November 2001
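The notion of partial compensation back to a safepoint can be sketched in a few lines. This is an illustrative simplification with invented step names, not the paper's formal set/graph-theoretic model: on failure, compensation actions run in reverse execution order until the most recent safepoint, so work completed before the safepoint is preserved.

```python
# Sketch: partial compensation of a failed workflow back to a safepoint.
# Steps after the last reached safepoint are compensated in reverse order.

def compensate(log, safepoints):
    """log: executed steps in order; returns the compensation plan."""
    # index of the most recently reached safepoint (-1 if none)
    last_sp = max((i for i, s in enumerate(log) if s in safepoints),
                  default=-1)
    # compensate everything after the safepoint, newest first
    return [f"undo:{step}" for step in reversed(log[last_sp + 1:])]

executed = ["reserve", "SP1", "charge", "ship"]
plan = compensate(executed, safepoints={"SP1"})
# compensates 'ship', then 'charge', stopping at safepoint SP1
```

With no safepoint in the log, the same function degenerates to full compensation of the whole process, which matches the classical (non-partial) scheme.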

3.
Requirements Engineering-Based Conceptual Modelling
The software production process involves a set of phases between which clear relationships and smooth transitions should be established. In this paper, a requirements engineering-based conceptual modelling approach is introduced as a way to improve the quality of the software production process. The aim of this approach is to provide a set of techniques and methods to capture software requirements and a way to move from requirements to a conceptual schema in a traceable manner. The approach combines a framework for requirements engineering (TRADE) and a graphical object-oriented method for conceptual modelling and code generation (OO-Method). The intended improvement of the software production process is accomplished by providing precise methodological guidance for going from the user requirements (represented through the appropriate TRADE techniques) to the conceptual schema that properly represents them (according to the conceptual constructs provided by the OO-Method). Additionally, as the OO-Method provides full model-based code generation features, this combination minimises the time needed to obtain the final software product.

4.
Algebraic query optimisation for database programming languages
A major challenge still facing the designers and implementors of database programming languages (DBPLs) is that of query optimisation. We investigate algebraic query optimisation techniques for DBPLs in the context of a purely declarative functional language that supports sets as first-class objects. Since the language is computationally complete, issues such as non-termination of expressions and construction of infinite data structures can be investigated, whilst its declarative nature allows the issue of side effects to be avoided and a richer set of equivalences to be developed. The language has a well-defined semantics which permits us to reason formally about the properties of expressions, such as their equivalence with other expressions and their termination. The support of a set bulk data type enables much prior work on the optimisation of relational languages to be utilised. In the paper we first give the syntax of our archetypal DBPL and briefly discuss its semantics. We then define a small but powerful algebra of operators over the set data type, provide some key equivalences for expressions in these operators, and list transformation principles for optimising expressions. Along the way, we identify some caveats to well-known equivalences for non-deductive database languages. We next extend our language with two higher-level constructs commonly found in functional DBPLs: set comprehensions and functions with known inverses. Some key equivalences for these constructs are provided, as are transformation principles for expressions in them. Finally, we investigate extending our equivalences for the set operators to the analogous operators over bags. Although developed and formally proved in the context of a functional language, our findings are directly applicable to other DBPLs of similar expressiveness. Edited by Matthias Jarke, Jorge Bocca, Carlo Zaniolo. Received September 15, 1994 / Accepted September 1, 1995
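The flavour of such algebraic equivalences can be shown concretely over sets. The two laws below (operator names are ours, not the paper's algebra) are standard examples of the kind of rewrite a DBPL optimiser exploits: selection distributes over union, and selections can be pushed below a product so that discarded pairs are never built.

```python
# Two algebraic equivalences over sets, demonstrated directly.

def select(p, s):
    """Relational-style selection: keep elements of s satisfying p."""
    return {x for x in s if p(x)}

A = {1, 2, 3, 4}
B = {3, 4, 5, 6}
even = lambda x: x % 2 == 0

# (1) Selection distributes over union: sel_p(A ∪ B) = sel_p(A) ∪ sel_p(B)
lhs = select(even, A | B)
rhs = select(even, A) | select(even, B)
assert lhs == rhs

# (2) Selection pushdown through a product: filtering A before pairing
# yields the same result as filtering the pairs afterwards, but the
# pushed-down form never materialises the rejected pairs.
pairs_then_filter = {(a, b) for a in A for b in B if even(a)}
filter_then_pairs = {(a, b) for a in select(even, A) for b in B}
assert pairs_then_filter == filter_then_pairs
```

In a computationally complete language such laws carry caveats the abstract alludes to: if the predicate or a subexpression may fail to terminate, the two sides are no longer interchangeable in general.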

5.
6.
Advanced e-services require efficient, flexible, and easy-to-use workflow technology that integrates well with mainstream Internet technologies such as XML and Web servers. This paper discusses an XML-enabled architecture for distributed workflow management that is implemented in the latest version of our Mentor-lite prototype system. The key asset of this architecture is an XML mediator that handles the exchange of business and flow-control data between workflow and business-object servers on the one hand and client activities on the other via XML messages over HTTP. Our implementation of the mediator has made use of Oracle's XSQL servlet. The major benefit of the advocated architecture is that it provides seamless integration of client applications into e-service workflows with scalable efficiency and very little explicit coding, in contrast to an earlier, Java-based version of our Mentor-lite prototype that required much more code and exhibited potential performance problems. Received: 30 October 2000 / Accepted: 19 December 2000 Published online: 27 April 2001

7.
Executable Petri net models for the analysis of metabolic pathways
Computer-assisted simulation of biochemical processes is a means of augmenting knowledge about the control mechanisms of such processes in particular organisms. This knowledge can be helpful for the goal-oriented design of drugs. Normally, continuous models (differential equations) are chosen for modelling such processes. The application of discrete event systems such as Petri nets has in the past been restricted to low-level modelling and qualitative analysis. To demonstrate that Petri nets are indeed suitable for simulating metabolic pathways, glycolysis and the citric acid cycle are selected as well-understood examples of enzymatic reaction chains (metabolic pathways). The paper discusses the steps that lead from gaining the necessary knowledge about the involved enzymes and substances, to establishing and tuning high-level net models, to performing a series of simulations, and finally to analysing the results. We show that the consistent application of the Petri net view to these tasks has certain advantages, and that – using advanced net tools – reasonable simulation times can be achieved. Published online: 24 August 2001
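The execution semantics underlying such models can be sketched in miniature. Below is a toy place/transition net (far simpler than the high-level nets the paper uses) for an enzymatic reaction A + E → B + E: places hold token counts, and a transition fires only while its pre-places carry enough tokens. Note how the enzyme E is consumed and re-produced by each firing.

```python
# Minimal place/transition Petri net executing a toy enzymatic reaction.
# A marking maps each place to its token count.

def enabled(marking, pre):
    """A transition is enabled iff every pre-place holds enough tokens."""
    return all(marking.get(p, 0) >= n for p, n in pre.items())

def fire(marking, pre, post):
    """Fire once: consume pre-tokens, produce post-tokens."""
    m = dict(marking)
    for p, n in pre.items():
        m[p] -= n
    for p, n in post.items():
        m[p] = m.get(p, 0) + n
    return m

# reaction A + E -> B + E as (pre-set, post-set)
react = ({"A": 1, "E": 1}, {"B": 1, "E": 1})
marking = {"A": 3, "E": 1, "B": 0}
while enabled(marking, react[0]):   # run until the substrate is exhausted
    marking = fire(marking, *react)
# all A converted to B; the enzyme count is unchanged
```

A real pathway model chains many such transitions (one per enzymatic step), and quantitative tuning amounts to adjusting arc weights and initial markings.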

8.
Dealing with forward and backward jumps in workflow management systems
Workflow management systems (WfMS) offer a promising technology for the realization of process-centered application systems. A deficiency of existing WfMS is their inadequate support for dealing with exceptional deviations from the standard procedure. In the ADEPT project, we have therefore developed advanced concepts for workflow modeling and execution which aim to increase the flexibility of WfMS. On the one hand, we allow workflow designers to model exceptional execution paths at build time, provided these deviations are known in advance. On the other hand, authorized users may dynamically deviate from the pre-modeled workflow at runtime in order to deal with unforeseen events. In this paper, we focus on the forward and backward jumps needed in this context. We describe sophisticated modeling concepts for capturing deviations in workflow models at build time, and we show how forward and backward jumps (of different semantics) can be correctly applied in an ad hoc manner during runtime as well. We work out the basic requirements, facilities, and limitations arising in this context. Our experiences with applications from different domains have shown that the developed concepts form a key part of process flexibility in process-centered information systems. Received: 6 October 2002 / Accepted: 8 January 2003 Published online: 27 February 2003 This paper is a revised and extended version of [40]. The described work was partially performed in the research project “Scalability in Adaptive Workflow Management Systems” funded by the Deutsche Forschungsgemeinschaft (DFG).
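For a purely sequential workflow, the state-resetting effect of a backward jump can be illustrated as follows. This is our own simplification with invented activity names, not ADEPT's actual execution model (which must also handle branching, data flow, and compensation of completed work): jumping back sets the target activity running again and resets every later activity.

```python
# Sketch: a backward jump in a sequential workflow resets the jump
# target and all activities after it, so execution resumes there.

def backward_jump(activities, states, target):
    """states: activity -> 'done' | 'running' | 'waiting'."""
    i = activities.index(target)
    new_states = dict(states)
    for a in activities[i:]:
        new_states[a] = "waiting"      # reset target and later activities
    new_states[target] = "running"     # resume at the jump target
    return new_states

acts = ["enter_order", "check_stock", "bill", "ship"]
st = {"enter_order": "done", "check_stock": "done",
      "bill": "running", "ship": "waiting"}
st2 = backward_jump(acts, st, "check_stock")
# 'check_stock' is running again; 'bill' and 'ship' are reset to waiting
```

A forward jump is the dual case: activities between the current position and the target are skipped rather than reset, which raises the data-flow correctness issues the paper analyses.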

9.
This paper proposes a new hand-held device called “InfoPoint” that allows appliances to work together over a network. We have applied the idea of the “drag-and-drop” operation provided in the GUIs of PC and workstation desktop environments. InfoPoint provides a unified interface that gives different types of appliances “drag-and-drop”-like behaviour for the transfer of data. Moreover, it can transfer data from/to non-appliances such as pieces of paper. As a result, InfoPoint allows appliances to work together, in the real-world environment, in terms of data transfer. A prototype of InfoPoint has been implemented and several experimental applications have been investigated. InfoPoint has shown its applicability in a variety of circumstances. We believe that the idea proposed in this paper will be a significant technology in the network of the future.

10.
The combination of SGML and database technology makes it possible to refine both declarative and navigational access mechanisms for structured document collections: with regard to declarative access, the user can formulate complex information needs without knowing a query language, the respective document type definition (DTD) or the underlying modelling. Navigational access is eased by hyperlink-rendition mechanisms going beyond plain link-integrity checking. With our approach, the database-internal representation of documents is configurable. It allows for an efficient implementation of operations, because DTD knowledge is not needed for document structure recognition. We show how the number of method invocations and the cost of parsing can be significantly reduced. Edited by Y.C. Tay. Received April 22, 1996 / Accepted March 16, 1997

11.
The aim of this paper is to introduce the socio-technical Role Activity Diagram modelling language to National Health Service (NHS) information systems requirements engineering using a process approach. Most requirements engineering in the NHS is done using data-driven methods such as data flow diagrams. Role Activity Diagrams provide not only a socio-technical method for analysing a particular systems development problem; they also offer a process-based approach for capturing workflows and their associated information flows, and facilitate communication between analysts and users in an intuitive fashion. In particular, they elicit the important roles in a process and the interaction and collaboration required to achieve the goals of the process. The process approach has been applied in business information systems development. It is introduced here as a potential approach to systems development in the NHS.

12.
This paper addresses user modelling for “Design for All” in a model-based approach to Human-Computer Interaction, paying particular attention to placing user models within organisational role- and task-related contexts. After reviewing a variety of user modelling approaches, and deriving requirements for user modelling related to Design for All, the paper proposes a role-driven individualised approach. Such an approach is based on a model-based representation schema and a unifying notation that keeps the user’s models and the contextual information transparent and consistent. Individualisation is achieved by coupling symbolic model specifications with neural networking on synchronisation links between symbolic representation elements. As a result, user modelling for Design for All is achieved not by stereotypical user properties and functional roles, but by accommodating the actual users’ behaviour. Published online: 18 May 2001

13.
Knowledge-based systems for document analysis and understanding (DAU) are quite useful whenever analysis has to deal with changing free-form document types that require different analysis components. In this case, declarative modeling is a good way to achieve flexibility. An important application domain for such systems is the business letter domain. Here, high accuracy and correct assignment to the right people and the right processes is a crucial success factor. Our solution is a comprehensive knowledge-centered approach: we model not only comparatively static knowledge concerning document properties and analysis results within the same declarative formalism, but we also include the analysis task and the current context of the system environment within the same formalism. This allows an easy definition of new analysis tasks and also an efficient and accurate analysis by using expectations about incoming documents as context information. The approach described has been implemented within the VOPR (VOPR is an acronym for the Virtual Office PRototype.) system. This DAU system gains the required context information from a commercial workflow management system (WfMS) by constant exchanges of expectations and analysis tasks. Further interaction between these two systems covers the delivery of results from the DAU system to the WfMS and the delivery of corrected results vice versa. Received June 19, 1999 / Revised November 8, 2000

14.
The most common way of designing databases is by means of a conceptual model, such as E/R, without taking into account other views of the system. New object-oriented design languages, such as UML (Unified Modelling Language), allow the whole system, including the database schema, to be modelled in a uniform way. Moreover, as UML is an extensible language, it allows new stereotypes to be introduced for specific applications where necessary. Proposals exist to extend UML with stereotypes for database design but, unfortunately, they are focused on relational databases. However, new applications require complex objects to be represented in complex relationships, object-relational databases being more appropriate for these requirements. The framework of this paper is an Object-Relational Database Design Methodology, which defines new UML stereotypes for object-relational database design and proposes guidelines to translate a UML conceptual schema into an object-relational schema. The guidelines are based on the SQL:1999 object-relational model and on Oracle8i as a product example. Initial submission: 22 January 2002 / Revised submission: 10 June 2002 Published online: 7 January 2003 This paper is a revised and extended version of Extending UML for Object-Relational Database Design, presented at the UML’2001 conference [17].

15.
16.
In many applications, especially in the business domain, the requirements specification mainly deals with use cases and class models. Unfortunately, these models are based on different modelling techniques and aim at different levels of abstraction, which induces serious consistency and completeness problems. To overcome these deficiencies, we refine activity graphs to meet the needs of a suitable modelling element for use case behaviour. The refinement in particular supports the proper coupling of use cases via activity graphs and the class model. The granularity and semantics of our approach allow for a seamless, traceable transition from use cases to the class model and for the verification of the class model against the use case model. The validation of the use case model and parts of the class model is supported as well. Experience from several applications has shown that the investment in specification, validation and verification not only pays off during system and acceptance testing but also significantly improves the quality of the final product.

17.
Summary. In this paper we present a proof of the sequential consistency of the lazy caching protocol of Afek, Brown, and Merritt. The proof will follow a strategy of stepwise refinement, developing the distributed caching memory in five transformation steps from a specification of the serial memory, whilst preserving the sequential consistency in each step. The proof, in fact, presents a rationalized design of the distributed caching memory. We will carry out our proof using a simple process-algebraic formalism for the specification of the various design stages. We will not follow a strictly algebraic exposition, however. At some points the correctness will be shown using direct semantic arguments, and we will also employ higher-order constructs like action transducers to relate behaviours. The distribution of the design/proof over five transformation steps provides a good insight into the variations that could have been allowed at each point of the design while still maintaining sequential consistency. The design/proof in fact establishes the correctness of a whole family of related memory architectures. The factorization in smaller steps also allows for a closer analysis of the fairness assumptions about the distributed memory.

18.
The development of complex information systems calls for conceptual models that describe aspects beyond entities and activities. In particular, recent research has pointed out that conceptual models need to model goals, in order to capture the intentions which underlie complex situations within an organisational context. This paper focuses on one class of goals, namely non-functional requirements (NFR), which need to be captured and analysed from the very early phases of the software development process. The paper presents a framework for integrating NFRs into the ER and OO models. This framework has been validated by two case studies, one of which is very large. The results of the case studies suggest that goal modelling during early phases can lead to a more productive and complete modelling activity.

19.
This paper describes a complete stereovision system, which was originally developed for planetary applications but can be used for other applications such as object modeling. A new, effective on-site calibration technique has been developed, which can make use of information from the surrounding environment as well as information from the calibration apparatus. A correlation-based stereo algorithm is used, which can produce sufficiently dense range maps with an algorithmic structure suited to fast implementations. A technique based on iterative closest-point matching has been developed for the registration of successive depth maps and the computation of the displacements between successive positions. A statistical method based on the distance distribution is integrated into this registration technique, which allows us to deal with such important problems as outliers, occlusion, appearance, and disappearance. Finally, the registered maps are expressed in the same coordinate system and fused, erroneous data are eliminated through consistency checking, and a global digital elevation map is built incrementally.
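The core computation inside each iteration of closest-point registration is the rigid-transform fit between matched point sets. The sketch below is a simplified 2-D stand-in for the paper's 3-D method (correspondences are assumed already known here; real ICP re-estimates them each iteration and, as the paper describes, must also reject outliers): it recovers the rotation and translation aligning point set P onto Q by a standard centroid/cross-covariance fit.

```python
import math

# Least-squares rigid fit in 2-D: find rotation theta and translation
# (tx, ty) mapping each p in P onto the corresponding q in Q.

def rigid_fit_2d(P, Q):
    n = len(P)
    cx_p = sum(x for x, _ in P) / n
    cy_p = sum(y for _, y in P) / n
    cx_q = sum(x for x, _ in Q) / n
    cy_q = sum(y for _, y in Q) / n
    # cross-covariance terms of the centered point sets
    sxx = sxy = syx = syy = 0.0
    for (px, py), (qx, qy) in zip(P, Q):
        ax, ay = px - cx_p, py - cy_p
        bx, by = qx - cx_q, qy - cy_q
        sxx += ax * bx; sxy += ax * by
        syx += ay * bx; syy += ay * by
    # optimal rotation angle, then the translation matching the centroids
    theta = math.atan2(sxy - syx, sxx + syy)
    c, s = math.cos(theta), math.sin(theta)
    tx = cx_q - (c * cx_p - s * cy_p)
    ty = cy_q - (s * cx_p + c * cy_p)
    return theta, (tx, ty)

# Q is P rotated by 90 degrees and shifted by (1, 2)
P = [(0, 0), (1, 0), (0, 1)]
Q = [(1, 2), (1, 3), (0, 2)]
theta, (tx, ty) = rigid_fit_2d(P, Q)
# theta ≈ pi/2, translation ≈ (1, 2)
```

Full ICP wraps this fit in a loop: match each point of P to its nearest neighbour in Q, fit, apply the transform, and repeat until the alignment error stops decreasing.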

20.