Similar Documents
17 similar documents found.
1.
This paper presents a workflow distribution methodology for rationally deploying workflow models onto distributed workflow systems running in cloud computing environments. We call such systems collaborative workflow systems: they are built upon the collaborative workflow architectures proposed in this paper, and they pursue a collaborative computing paradigm characterized by a focus on collaboration over cloud computing environments. The essential idea of the methodology is how to fragment a workflow model and how to allocate its fragments to each of the architectural components that make up the underlying collaborative workflow architecture and system. As a solution, the paper proposes a model-driven workflow fragmentation framework providing a series of algorithms that fragment a workflow model semantically, using the semantic factors of the ICN-based workflow model (performer, role, control flow, data flow, etc.) as fragmentation criteria. The algorithms are classified into a vertical fragmentation approach, a horizontal fragmentation approach, and a hybrid of the two. Finally, the paper conceives a possible set of collaborative workflow architectures embodying the collaborative computing paradigm, formalizes the framework in detail, and describes how the framework works on those architectures and systems.
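As a rough illustration of the vertical fragmentation idea, the sketch below groups the activities of a simplified ICN-style model by performer and keeps the cross-fragment control-flow edges. The model structure and all names (Activity, fragment_by_performer) are illustrative assumptions, not the paper's formalism.

```python
# A minimal sketch of vertical workflow fragmentation, assuming a simplified
# ICN-style model in which each activity names its performer.
from collections import defaultdict
from dataclasses import dataclass

@dataclass(frozen=True)
class Activity:
    name: str
    performer: str  # fragmentation criterion: who executes the activity

def fragment_by_performer(activities, control_flow):
    """Group activities by performer (vertical fragmentation) and keep
    the cross-fragment control-flow edges that the deployed components
    must later synchronize on."""
    fragments = defaultdict(list)
    for a in activities:
        fragments[a.performer].append(a)
    cross_edges = [(src, dst) for src, dst in control_flow
                   if src.performer != dst.performer]
    return dict(fragments), cross_edges

a, b, c = (Activity("review", "alice"), Activity("approve", "bob"),
           Activity("archive", "alice"))
frags, edges = fragment_by_performer([a, b, c], [(a, b), (b, c)])
print(frags.keys())   # fragments to deploy on separate components
print(edges)          # edges requiring cross-component coordination
```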

2.
Dealing with forward and backward jumps in workflow management systems
Workflow management systems (WfMS) offer a promising technology for realizing process-centered application systems. A deficiency of existing WfMS is their inadequate support for dealing with exceptional deviations from the standard procedure. In the ADEPT project we have therefore developed advanced concepts for workflow modeling and execution that aim to increase the flexibility of WfMS. On the one hand, workflow designers can model exceptional execution paths at buildtime, provided these deviations are known in advance. On the other hand, authorized users may dynamically deviate from the pre-modeled workflow at runtime to deal with unforeseen events. In this paper we focus on the forward and backward jumps needed in this context. We describe modeling concepts for capturing deviations in workflow models at buildtime, and we show how forward and backward jumps (of different semantics) can be applied correctly in an ad-hoc manner during runtime. We work out the basic requirements, facilities, and limitations arising in this context. Our experience with applications from different domains has shown that the developed concepts form a key part of process flexibility in process-centered information systems. Received: 6 October 2002 / Accepted: 8 January 2003 / Published online: 27 February 2003. This paper is a revised and extended version of [40]. The described work was partially performed in the research project "Scalability in Adaptive Workflow Management Systems" funded by the Deutsche Forschungsgemeinschaft (DFG).
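The sketch below illustrates the general idea of ad-hoc forward and backward jumps in a sequential workflow instance. The state-transition rules are simplified assumptions for illustration, not ADEPT's actual correctness criteria.

```python
# A hedged sketch of ad-hoc forward/backward jumps in a sequential workflow
# instance: jumps must leave the instance in a consistent execution state.
def jump(sequence, state, current, target):
    """Move the instance from `current` to `target`.
    Backward jump: reset every activity in between to NOT_ACTIVATED so it
    can be re-executed. Forward jump: mark skipped activities as SKIPPED."""
    i, j = sequence.index(current), sequence.index(target)
    if i == j:
        raise ValueError("target equals current activity")
    if j < i:   # backward jump: only to an already completed activity
        if state[target] != "COMPLETED":
            raise ValueError("backward jumps require a completed target")
        for act in sequence[j:i + 1]:
            state[act] = "NOT_ACTIVATED"
    else:       # forward jump: skip the in-between activities
        for act in sequence[i:j]:
            state[act] = "SKIPPED"
    state[target] = "ACTIVATED"
    return state

seq = ["prepare", "review", "approve", "ship"]
st = {"prepare": "COMPLETED", "review": "COMPLETED",
      "approve": "ACTIVATED", "ship": "NOT_ACTIVATED"}
print(jump(seq, st, "approve", "review"))  # backward jump for rework
```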

3.
Methods and levels of exception handling in workflow management systems
The application of workflow technology in the field of information processing is attracting growing attention, but the continual evolution of operating environments and user requirements demands flexible handling capabilities from workflow management systems; exception handling in workflow systems exists precisely to cope with these ever-changing requirements. This article introduces the scope of application of workflow exception handling, summarizes the different approaches in use, proposes future exception-handling levels for workflow systems from a system perspective, and explores research on adaptive workflow technology.

4.
Flexible activity refinement plays an important role in improving process flexibility and addressing the uncertainties of business processes. However, refining flexible activities remains a challenge, and existing research on flexible activity refinement, such as refinement principles and methods and their combination with factors such as constraints and contexts, is still lacking. To this end, a novel dynamic refinement approach for flexible activities is proposed that combines vertical decomposition and horizontal extension refinements while taking the impact of constraints and contexts into account. In particular, we summarize five typical refinement categories and present a set of activity refinement rules based on them. Furthermore, decomposition refinement, including the activity decomposition principles and the related rules for trigger-event delivering and execution-condition transferring, is discussed in detail. Extension refinement, which realizes horizontal refinement, can be integrated with other kinds of refinement and uses constraints to specify activity selection, activity temporal relationships, etc. A tree-like activity refinement graph (ARG) is then proposed to represent the refinement process; based on it, the refinement cost and refinement degree can be computed to help find a potentially optimal refinement path. As a further implementation of the proposed approach, a general refinement algorithm is described. Finally, a case study of a urolithiasis therapy process and its application are given, and the results indicate the effectiveness of our proposals.
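To make the cost-driven search over a tree-like ARG concrete, here is a minimal sketch; the node structure and per-step costs are illustrative assumptions, not the paper's exact formalism.

```python
# A minimal sketch of a tree-like activity refinement graph (ARG) and a
# minimum-cost refinement path search via depth-first traversal.
def min_cost_path(arg, node, cost=0.0, path=None):
    """Each edge carries the cost of one refinement step (decomposition
    or extension). Returns (cost, path) of the cheapest fully refined
    alternative reachable from `node`."""
    path = (path or []) + [node]
    children = arg.get(node, [])
    if not children:                       # leaf: a concrete refinement
        return cost, path
    return min(min_cost_path(arg, child, cost + step, path)
               for child, step in children)

# Hypothetical ARG for a flexible "diagnosis" activity: each entry maps a
# node to its refinement alternatives as (child, refinement_cost) pairs.
arg = {
    "diagnosis": [("imaging", 2.0), ("lab-tests", 1.0)],
    "imaging":   [("ultrasound", 1.0), ("ct-scan", 3.0)],
    "lab-tests": [("blood-panel", 1.5)],
}
print(min_cost_path(arg, "diagnosis"))
# -> (2.5, ['diagnosis', 'lab-tests', 'blood-panel'])
```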

5.
Automation of the execution of computational tasks is at the heart of improving scientific productivity. Over the last years, scientific workflows have been established as an important abstraction that captures data processing and computation of large and complex scientific applications. By allowing scientists to model and express entire data processing steps and their dependencies, workflow management systems relieve scientists from the details of an application and manage its execution on a computational infrastructure. As the resource requirements of today's computational and data science applications that process vast amounts of data keep increasing, there is a compelling case for a new generation of advances in high-performance computing, commonly termed extreme-scale computing, which will bring forth multiple challenges for the design of workflow applications and management systems. This paper presents a novel characterization of workflow management systems using features commonly associated with extreme-scale computing applications. We classify 15 popular workflow management systems in terms of workflow execution models, heterogeneous computing environments, and data access methods. The paper also surveys workflow applications and identifies gaps for future research on the road to extreme-scale workflows and management systems.

6.
Many scientific workflows are data intensive: large volumes of intermediate datasets are generated during their execution. Some valuable intermediate datasets need to be stored for sharing or reuse. Traditionally, they are selectively stored according to the system storage capacity, determined manually. As doing science on clouds has become popular, more intermediate datasets in scientific cloud workflows can be stored under different storage strategies based on a pay-as-you-go model. In this paper, we build an intermediate data dependency graph (IDG) from the data provenances in scientific workflows. With the IDG, deleted intermediate datasets can be regenerated, and on this basis we develop a novel algorithm that finds a minimum cost storage strategy for the intermediate datasets in scientific cloud workflow systems. The strategy achieves the best trade-off between computation cost and storage cost by automatically storing the most appropriate intermediate datasets in cloud storage, and it can be used on demand as a minimum cost benchmark for all other intermediate dataset storage strategies in the cloud. We use Amazon's cloud cost model and apply the algorithm to general random workflows as well as a specific astrophysics pulsar searching scientific workflow for evaluation. The results show that the benchmark effectively demonstrates cost effectiveness compared with other representative storage strategies.
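The local store-versus-regenerate trade-off at the heart of such a strategy can be sketched as follows; the greedy per-dataset rule and all prices are simplifying assumptions (the paper's algorithm reasons globally over the IDG).

```python
# A hedged sketch of the store-versus-regenerate trade-off behind a
# minimum cost storage strategy for intermediate datasets.
from dataclasses import dataclass

@dataclass
class Dataset:
    name: str
    size_gb: float
    regen_cpu_cost: float   # $ to recompute it from stored ancestors
    uses_per_month: float   # how often it is accessed

STORAGE_PRICE = 0.023       # assumed $/GB/month, S3-like pricing

def monthly_cost(d: Dataset, stored: bool) -> float:
    """Storage cost if kept, expected regeneration cost if deleted."""
    if stored:
        return d.size_gb * STORAGE_PRICE
    return d.regen_cpu_cost * d.uses_per_month

def choose_strategy(datasets):
    # Store a dataset only when keeping it is cheaper than regenerating it.
    return {d.name: monthly_cost(d, True) <= monthly_cost(d, False)
            for d in datasets}

ds = [Dataset("de-dispersed-signal", 800, 1.20, 4),
      Dataset("candidate-list", 2, 9.50, 0.1)]
print(choose_strategy(ds))
# -> {'de-dispersed-signal': False, 'candidate-list': True}
```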

7.
Security is increasingly critical for scientific workflows, which are big data applications that typically take a considerable amount of time to execute on large-scale distributed infrastructures. A cloud computing platform is such an infrastructure, enabling dynamic resource scaling on demand. Nevertheless, under a pay-per-use, hourly pricing model, users must pay attention to the cost incurred by renting virtual machines (VMs) from cloud data centers. Meanwhile, workflow tasks are generally heterogeneous and require different instance series (i.e., compute optimized, memory optimized, storage optimized, etc.). In this paper, we propose a security and cost aware scheduling (SCAS) algorithm for heterogeneous tasks of scientific workflows in clouds. Our algorithm is based on a meta-heuristic optimization technique, particle swarm optimization (PSO), whose coding strategy is devised to minimize the total workflow execution cost while meeting deadline and risk rate constraints. Extensive experiments using three real-world scientific workflow applications and the CloudSim simulation framework demonstrate the effectiveness and practicality of our algorithm.
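A toy PSO loop of the kind such a scheduler builds on is sketched below: each particle encodes a task-to-VM-type assignment and deadline violations are penalized in the fitness. All prices, speedups, and the discrete update rule are invented for illustration and do not reproduce SCAS's actual coding strategy.

```python
# A hedged particle swarm optimization (PSO) sketch for workflow task
# scheduling: minimize rental cost subject to a deadline (via penalty).
import random

N_TASKS, VM_TYPES = 5, 3
VM_PRICE = [0.10, 0.20, 0.40]     # $/hour per VM type (assumed)
VM_SPEEDUP = [1.0, 1.8, 3.2]      # relative speed per VM type (assumed)
TASK_HOURS = [4, 2, 6, 1, 3]      # base runtime on the slowest type
DEADLINE = 9.0                    # hours; tasks treated as sequential

def fitness(assign):
    """Rental cost plus a heavy penalty when the deadline is violated."""
    hours = [TASK_HOURS[t] / VM_SPEEDUP[assign[t]] for t in range(N_TASKS)]
    cost = sum(h * VM_PRICE[assign[t]] for t, h in enumerate(hours))
    return cost + (1000.0 if sum(hours) > DEADLINE else 0.0)

def pso(particles=20, iters=100, seed=42):
    random.seed(seed)
    swarm = [[random.randrange(VM_TYPES) for _ in range(N_TASKS)]
             for _ in range(particles)]
    best = min(swarm, key=fitness).copy()
    for _ in range(iters):
        for p in swarm:
            for t in range(N_TASKS):
                r = random.random()
                if r < 0.5:                   # pull toward the global best
                    p[t] = best[t]
                elif r < 0.6:                 # random exploration
                    p[t] = random.randrange(VM_TYPES)
        cand = min(swarm, key=fitness)
        if fitness(cand) < fitness(best):
            best = cand.copy()
    return best, fitness(best)

print(pso())   # each task's VM type, and the total cost achieved
```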

8.
Forecasting workflow activity durations is of great importance for supporting satisfactory QoS in workflow systems. Traditionally, a workflow system is designed to facilitate process automation in a specific application domain where activities are of a similar nature; hence a particular forecasting strategy is employed by a workflow system and applied uniformly to all its workflow activities. However, with the newly emerging requirement to serve as middleware for high performance computing infrastructures such as grid and cloud computing, more and more workflow systems are designed to be general purpose, supporting workflow applications from many different domains. Because of this, the forecasting strategies in workflow systems must adapt to different workflow applications, which are normally executed repeatedly, such as data- and computation-intensive scientific applications (mainly with long-duration activities) and instance-intensive business applications (mainly with short-duration activities). In this paper, after a systematic analysis of the above issues, we propose a novel statistical time-series pattern based interval forecasting strategy with two versions: a complex version for long-duration activities and a simple version for short-duration activities. The strategy consists of four major functional components: duration series building, duration pattern recognition, duration pattern matching, and duration interval forecasting. Specifically, a novel hybrid non-linear time-series segmentation algorithm is designed to facilitate the discovery of duration-series patterns. Experimental results on real-world examples and simulated test cases demonstrate the excellent performance of our strategy in forecasting activity duration intervals for both long-duration and short-duration activities in comparison with representative time-series forecasting strategies in traditional workflow systems.
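A bare-bones version of the pattern matching and interval forecasting steps is sketched below; the pattern representation, the matching distance, and the interval rule are simplified assumptions, not the paper's algorithm.

```python
# A hedged sketch of pattern-based interval forecasting: match recent
# duration samples against stored patterns and return a duration interval.
from statistics import mean, stdev

PATTERNS = {                      # hypothetical duration patterns (seconds)
    "peak-load": [52, 55, 61, 58, 60, 57],
    "off-peak":  [20, 22, 19, 21, 23, 20],
}

def forecast_interval(recent, k=1.5):
    """Pick the pattern whose mean is closest to the recent samples and
    forecast mean +/- k standard deviations as the duration interval."""
    m = mean(recent)
    name, best = min(PATTERNS.items(), key=lambda kv: abs(mean(kv[1]) - m))
    mu, sigma = mean(best), stdev(best)
    return name, (mu - k * sigma, mu + k * sigma)

print(forecast_interval([56, 59, 58]))   # matches the long-duration pattern
```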

9.
One reason workflow systems have been criticized as being inflexible is that they lack support for delegation. This paper shows how delegation can be introduced in a workflow system by extending the role-based access control (RBAC) model. The current RBAC model is a security mechanism to implement access control in organizations by allowing users to be assigned to roles and privileges to be associated with the roles. Thus, users can perform tasks based on the privileges possessed by their own role or roles they inherit by virtue of their organizational position. However, there is no easy way to handle delegations within this model. This paper tries to treat the issues surrounding delegation in workflow systems in a comprehensive way. We show how delegations can be incorporated into the RBAC model in a simple and straightforward manner. The new extended model is called RBAC with delegation in a workflow context (DW-RBAC). It allows for delegations to be specified from a user to another user, and later revoked when the delegation is no longer required. The implications of such specifications and their subsequent revocations are examined. Several formal definitions for assertion, acceptance, execution and revocation are provided, and proofs are given for the important properties of our delegation framework.
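The following sketch shows RBAC extended with user-to-user delegation and revocation in the spirit described above; the data structures and checks are simplified illustrations, not the DW-RBAC formal model.

```python
# A minimal sketch of RBAC with user-to-user delegation and revocation.
class RBAC:
    def __init__(self):
        self.user_roles = {}     # user  -> set of roles
        self.role_perms = {}     # role  -> set of permissions
        self.delegations = set() # (delegator, delegatee, role)

    def can(self, user, perm):
        roles = set(self.user_roles.get(user, set()))
        # add roles received through currently active delegations
        roles |= {r for (_, to, r) in self.delegations if to == user}
        return any(perm in self.role_perms.get(r, set()) for r in roles)

    def delegate(self, delegator, delegatee, role):
        # a user may only delegate a role they actually hold
        assert role in self.user_roles.get(delegator, set())
        self.delegations.add((delegator, delegatee, role))

    def revoke(self, delegator, delegatee, role):
        self.delegations.discard((delegator, delegatee, role))

rbac = RBAC()
rbac.user_roles = {"alice": {"manager"}, "bob": {"clerk"}}
rbac.role_perms = {"manager": {"approve_invoice"}, "clerk": {"file_invoice"}}
rbac.delegate("alice", "bob", "manager")
print(rbac.can("bob", "approve_invoice"))   # True while delegated
rbac.revoke("alice", "bob", "manager")
print(rbac.can("bob", "approve_invoice"))   # False after revocation
```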

10.
This paper presents some results of integrating predicate transition nets with first order temporal logic in the specification and verification of concurrent systems. The intention of this research is to use predicate transition nets as a specification method and first order temporal logic as a verification method, so that their strengths (the easy comprehension of predicate transition nets and the reasoning power of first order temporal logic) can be combined. In this paper, a theoretical relationship between the computation models of these two formalisms is presented; an algorithm for systematically translating a predicate transition net into a corresponding temporal logic system is outlined; and a special temporal refutation proof technique is proposed and illustrated in verifying various concurrency properties of the predicate transition net specification of the five dining philosophers problem.
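For a flavor of the properties involved, liveness and safety for the dining philosophers might be stated in first order temporal logic along the following lines; these are hedged illustrations, not the paper's exact formulas.

```latex
% Liveness: every hungry philosopher eventually eats.
\forall p \; \Box \big( \mathit{hungry}(p) \rightarrow \Diamond\, \mathit{eating}(p) \big)
% Safety: neighbouring philosophers never eat at the same time.
\forall p \; \Box \neg \big( \mathit{eating}(p) \wedge \mathit{eating}(\mathit{right}(p)) \big)
```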

11.
Computational workflows are a powerful paradigm for representing and managing complex applications, particularly in large-scale distributed scientific data analysis. Workflows represent application components that result in individual computations, as well as their interdependencies in terms of dataflow. Workflow systems use these representations to manage various aspects of workflow creation and execution for users, such as the automatic assignment of execution resources. This article describes an approach to automating a new aspect of the process: the selection of application components and data sources. We present a novel approach that enables users to specify varying degrees of detail and amounts of constraints in a workflow request, including constraints on input, intermediate, or output data in the workflow, abstract workflow component classes rather than specific component implementations, and generic reusable workflow templates that express a pre-defined combination of components. The algorithm elaborates the user request into a set of fully ground workflows with specific choices of data sources and codes, so that they can be submitted for mapping and execution. The algorithm searches the space of possible candidate workflows by creating increasingly specialized versions of the original template and eliminating candidates that violate constraints accumulated in the candidate workflow as components and data sources are selected. A novel feature of our approach is that it assumes a distributed architecture in which data and component catalogues are separate from the workflow system: the algorithm explicitly poses queries to external catalogues, so no reasoning about data or component properties is assumed to occur within the workflow system. We describe our implementation of this approach in the Wings workflow system, which uses the W3C Web Ontology Language and associated reasoners to implement the workflow system as well as the data and component catalogues. This research demonstrates the use of artificial intelligence techniques to support the kinds of automation envisioned by the scientific community for large-scale distributed scientific data analysis.
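The elaboration step can be sketched as enumeration plus constraint pruning, as below; the catalogue contents, the bioinformatics component names, and the constraint check are all invented for illustration and are far simpler than Wings' ontology-based reasoning.

```python
# A hedged sketch of template elaboration: specialize abstract component
# classes from an external catalogue, pruning constraint violators.
from itertools import product

COMPONENT_CATALOGUE = {   # abstract class -> concrete implementations
    "Aligner": [{"name": "bwa", "max_read_len": 150},
                {"name": "minimap2", "max_read_len": 10000}],
    "Caller":  [{"name": "gatk", "needs_aligned": True}],
}

def elaborate(template, constraints):
    """Enumerate fully ground workflows: one concrete component per
    abstract step, keeping only combinations satisfying every constraint."""
    options = [COMPONENT_CATALOGUE[step] for step in template]
    candidates = []
    for combo in product(*options):
        ground = dict(zip(template, combo))
        if all(check(ground) for check in constraints):
            candidates.append({s: c["name"] for s, c in ground.items()})
    return candidates

# Constraint on input data: reads are 300 bp, so the aligner must cope.
constraints = [lambda g: g["Aligner"]["max_read_len"] >= 300]
print(elaborate(["Aligner", "Caller"], constraints))
# -> [{'Aligner': 'minimap2', 'Caller': 'gatk'}]
```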

12.
The selection of the right I/O scheduler for a given workload can significantly improve I/O performance. However, this is not an easy task, because several factors must be considered, and even the "best" scheduler can change over time, especially if the workload's characteristics change too. To address this problem, we present a Dynamic and Automatic Disk Scheduling framework (DADS) that simultaneously compares two different Linux I/O schedulers and dynamically selects the one that achieves the best I/O performance for any workload at any time. The comparison is made by running two instances of a disk simulator inside the Linux kernel. Results show that, by using DADS, the performance achieved is always close to that obtained by the best scheduler. System administrators are thus exempted from selecting a suboptimal scheduler that provides good performance for some workloads but may degrade system throughput when the workloads change.
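The selection loop can be caricatured as follows: replay the recent request stream under two policies and switch to the faster one. The "simulators" here are toy service-time models, not the in-kernel disk simulators the paper uses.

```python
# A hedged sketch of the DADS idea: simulate recent I/O under two
# scheduling policies and pick whichever would have served it faster.
def simulate(policy, requests):
    """Return total simulated service time for a list of block numbers."""
    head, total = 0, 0.0
    order = sorted(requests) if policy == "elevator" else list(requests)
    for block in order:          # "noop" serves in arrival order
        total += 0.1 + abs(block - head) / 1000.0   # toy seek-time model
        head = block
    return total

def pick_scheduler(recent_requests):
    times = {p: simulate(p, recent_requests) for p in ("noop", "elevator")}
    return min(times, key=times.get), times

# A seek-heavy random workload favours the sorting (elevator) policy:
print(pick_scheduler([900, 10, 850, 40, 790]))
```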

13.
The increasing complexity of heterogeneous systems-on-chip (SoCs) and distributed embedded systems makes system optimization and exploration a challenging task. Ideally, a designer would try all possible system configurations and choose the best one with respect to specific system requirements. Unfortunately, such an approach is not possible because of the tremendous number of design parameters with sophisticated effects on system properties. Consequently, good search techniques are needed to find design alternatives that best meet constraints and cost criteria. In this paper, we present a compositional design space exploration framework for system optimization and exploration using SymTA/S, a software tool for formal performance analysis. In contrast to many previous approaches pursuing closed automated exploration strategies over large sets of system parameters, our approach allows the designer to effectively control the exploration process to quickly find good design alternatives. An important aspect and key novelty of our approach is system optimization with traffic shaping.

14.
A complex network is a graph with non-trivial topological features that often occur in real systems, such as video monitoring networks, social networks, and sensor networks. While research on complex networks is growing, the main focus has been on the analysis and modeling of large networks with static topology. Prediction and control of temporal complex networks with evolving patterns are urgently needed but have rarely been studied. In view of this research gap, we propose a novel end-to-end deep learning based network model, called temporal graph convolution and attention (T-GAN), for the prediction of temporal complex networks. To jointly extract spatial and temporal features of complex networks, we design a new adaptive graph convolution and integrate it with Long Short-Term Memory (LSTM) cells. An encoder-decoder framework is applied to predict the properties and trends of complex networks, and a dual attention block is proposed to improve the model's sensitivity to different time slices. The proposed T-GAN architecture is general and scalable and can be used for a wide range of real applications. We demonstrate the application of T-GAN to three prediction tasks for evolving complex networks, namely node classification, feature forecasting, and topology prediction, over 6 open datasets. Our T-GAN based approach significantly outperforms existing models, achieving improvements of more than 4.7% in recall and 25.1% in precision. Additional experiments show the generalization of the proposed model in learning the characteristics of time-series images. Extensive experiments demonstrate the effectiveness of T-GAN in learning spatial and temporal features and predicting properties of complex networks.
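A numpy sketch of the spatial building block is given below: one adaptive graph convolution step whose propagation matrix mixes a fixed adjacency with a component learned from node embeddings, producing per-time-slice node features that a temporal model (e.g., an LSTM) would consume. The shapes and the mixing rule are illustrative assumptions, not T-GAN's actual layer.

```python
# A hedged sketch of one adaptive graph convolution step.
import numpy as np

rng = np.random.default_rng(0)
N, F_IN, F_OUT = 4, 3, 2          # nodes, input features, output features

A = np.array([[0, 1, 0, 0],       # fixed graph topology (self-loops
              [1, 0, 1, 0],       # are added below)
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
E = rng.normal(size=(N, 8))       # learnable node embeddings (assumed)
W = rng.normal(size=(F_IN, F_OUT))

def adaptive_graph_conv(X):
    """Combine the static adjacency with an adaptive adjacency learned
    from node embeddings, row-normalize, then project the features."""
    A_adapt = np.maximum(E @ E.T, 0)          # data-driven relations
    A_hat = A + np.eye(N) + A_adapt           # static + self-loops + adaptive
    A_hat /= A_hat.sum(axis=1, keepdims=True) # row-normalized propagation
    return np.tanh(A_hat @ X @ W)

X_t = rng.normal(size=(N, F_IN))  # node features at one time slice
print(adaptive_graph_conv(X_t).shape)   # (4, 2) -> fed to the LSTM cell
```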

15.
This paper examines social behavior in the online video game World of Warcraft, focusing specifically on one element of social design: the behavior of players in the first release of the Looking-for-Raid (LFR) loot system. It uses the lens of economic game theory, combined with Williams' (2010) mapping principle and a modern theoretical account of human decision-making, to explore how theory about individual interactions in well-defined contexts (games) can explain collective behavior. It provides some support for this theoretical approach with an examination of data collected as part of an ethnographic study, through focus groups, and through a survey distributed to 333 World of Warcraft players. It concludes with a discussion of the results and some guidelines for predicting collective outcomes in certain types of online games using the introduced framework.

16.
This paper presents a quantitative framework for early prediction of resource usage and load in distributed real-time systems (DRTS). The prediction is based on an analysis of UML 2.0 sequence diagrams, augmented with timing information, to extract timed-control flow information. It is aimed at improving the early predictability of a DRTS by offering a systematic approach to predict, at the design phase, system behavior in each time instant during its execution. Since behavioral models such as sequence diagrams are available in early design phases of the software life cycle, the framework enables resource analysis at a stage when design decisions are still easy to change. Though we provide a general framework, we use network traffic as an example resource type to illustrate how the approach is applied. We also indicate how usage and load analysis of other types of resources (e.g., CPU and memory) can be performed in a similar fashion. A case study illustrates the feasibility of the approach.
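As a toy version of the illustrated resource type, the sketch below turns messages extracted from a timed sequence diagram (start time, duration, bytes) into a predicted network traffic profile per time instant; the message tuples and the uniform-spreading rule are invented for illustration.

```python
# A hedged sketch of early network-traffic prediction from timed messages.
def traffic_profile(messages, horizon):
    """Return predicted network load for each time unit 0..horizon-1,
    spreading each message's payload uniformly over its duration."""
    load = [0.0] * horizon
    for start, duration, size_bytes in messages:
        rate = size_bytes / duration
        for t in range(start, min(start + duration, horizon)):
            load[t] += rate
    return load

# (start, duration, bytes) for three messages from a hypothetical diagram
msgs = [(0, 2, 4000), (1, 3, 9000), (2, 1, 1000)]
print(traffic_profile(msgs, 6))
# -> [2000.0, 5000.0, 4000.0, 3000.0, 0.0, 0.0]
```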

17.
This study presents a fault detection and isolation (FDI) framework based on the marginalized likelihood ratio (MLR) approach, using uniform priors for fault magnitudes in sensors and actuators. Existing methods in the literature use either flat priors with infinite support or the Gamma distribution as priors for the fault magnitudes. In the current study, the fault magnitude is assumed to be a realization of a uniform prior with known upper and lower limits. The presented method simultaneously detects the time of occurrence of the fault and isolates the fault type, while the fault magnitude is estimated using a least squares based approach. The proposed method is evaluated on a benchmark CSTR problem using Monte Carlo simulations, and the results reveal that it estimates the time of occurrence of the fault and the fault magnitude more accurately than a generalized likelihood ratio (GLR) based approach applied to the same benchmark. Simulation results on the benchmark also show significantly lower misclassification rates.
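The sketch below gives a GLR-style simplification of this kind of scheme for a sensor bias: for each candidate fault time, the bias magnitude is estimated by least squares (clipped to the uniform prior's known support) and scored by the change in log-likelihood. It is a hedged illustration, not the paper's marginalized computation.

```python
# A hedged sketch of likelihood-ratio detection of a sensor bias fault.
import numpy as np

def detect_bias_fault(residuals, sigma, b_lo, b_hi):
    """Return (best fault time, estimated magnitude, score). The bias is
    assumed to act on all residuals from the fault time onward."""
    n, best = len(residuals), (None, 0.0, 0.0)
    for k in range(1, n):
        tail = residuals[k:]
        # Least-squares magnitude estimate, clipped to the uniform
        # prior's known support [b_lo, b_hi].
        b_hat = float(np.clip(tail.mean(), b_lo, b_hi))
        # Log-likelihood improvement of "bias b_hat from time k" over
        # "no fault", up to a constant factor.
        score = (2 * b_hat * tail.sum() - len(tail) * b_hat**2) / sigma**2
        if score > best[2]:
            best = (k, b_hat, float(score))
    return best

rng = np.random.default_rng(1)
res = rng.normal(0, 0.5, 100)
res[60:] += 1.5                        # inject a sensor bias at t = 60
print(detect_bias_fault(res, 0.5, 0.0, 3.0))  # detects a fault near t = 60
```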
