Similar Documents
Found 20 similar documents (search time: 359 ms)
1.
Workflow Management Systems (WFMS) coordinate the execution of logically related tasks in an organization. Each workflow executed on such a system is an instance of some workflow schema; a workflow schema is defined by a set of tasks coordinated through dependencies. Workflows generated from the same schema may differ with respect to the tasks executed. An important issue that must be addressed while designing a workflow is deciding which tasks are needed for the workflow to complete — we refer to this set of tasks as the completion set. Since different tasks are executed in different workflow instances, a workflow schema may be associated with multiple completion sets. Incorrect specification of completion sets may prevent some workflows from completing, which in turn causes them to hold on to resources and raises availability problems. Manually generating these sets for large workflow schemas is an error-prone and tedious process. Our goal is to automate it. We investigate the factors that affect the completion of a workflow; specifically, we study the impact of control-flow dependencies on completion sets and show how this knowledge can be used to generate these sets automatically. We provide an algorithm that application developers can use to generate the completion sets associated with a workflow schema. Generating all possible completion sets for a large workflow is computationally intensive, so we also show how to estimate the number of completion sets approximately; if this number exceeds a user-specified threshold, we do not generate all of them.
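The enumeration this abstract describes can be sketched for a toy schema model. The AND/XOR node encoding below is an illustrative assumption, not the paper's formalism: XOR nodes contribute one branch per completion set, AND nodes combine the sets of all branches.

```python
# Sketch: enumerating the completion sets of a workflow schema.
# Schema encoding (node -> (kind, children)) is an illustrative assumption.

def completion_sets(schema, node):
    """Return every set of tasks that lets the sub-workflow at `node` complete."""
    kind, children = schema[node]
    if kind == "task":                      # leaf: the task itself must run
        return [{node}]
    child_sets = [completion_sets(schema, c) for c in children]
    if kind == "xor":                       # exactly one branch is taken
        return [s for sets in child_sets for s in sets]
    if kind == "and":                       # every branch must complete
        result = [set()]
        for sets in child_sets:
            result = [r | s for r in result for s in sets]
        return result
    raise ValueError(kind)

schema = {
    "root": ("and", ["a", "choice"]),
    "a": ("task", []),
    "choice": ("xor", ["b", "c"]),
    "b": ("task", []),
    "c": ("task", []),
}
print(completion_sets(schema, "root"))   # two sets: {a, b} and {a, c}
```

The number of completion sets (the quantity the abstract proposes to estimate) follows the same recursion: products over AND branches, sums over XOR branches.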

2.
In this paper, we study the problem of optimizing the throughput of coarse-grain workflow applications, in which each task of the workflow is of a given type and subject to failures. The goal is to map such an application onto a heterogeneous specialized platform, i.e., a set of processors each of which can be specialized to process one type of task. The objective is to maximize the throughput of the workflow, i.e., the rate at which data sets can enter the system. If there is exactly one task per processor in the mapping, we prove that the optimal solution can be computed in polynomial time; however, the problem becomes NP-hard if several tasks can be assigned to the same processor. Several polynomial-time heuristics are presented for the most realistic specialized setting, in which tasks of the same type can be mapped onto the same processor but a processor cannot process two tasks of different types. We also give an integer linear program formulation of the problem, which allows us to find the optimal solution (in exponential time) for small problem instances. Experimental results show that the best heuristics obtain a good throughput, much better than that of a random mapping. Moreover, we obtain a throughput close to the optimal solution in the particular cases in which the optimal throughput can be computed (small problem instances or particular mappings).
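A mapping heuristic in the spirit of the specialized setting above can be sketched as a greedy list scheduler: the period of the pipeline is the load of the busiest processor, and throughput is its inverse. Task weights and the heaviest-first rule are illustrative assumptions, not the paper's heuristics; the sketch also assumes an unspecialized processor is always available when a new type appears.

```python
# Greedy sketch: a processor handles tasks of one type only.
# Throughput = 1 / (load of the busiest processor).

def greedy_map(tasks, n_procs):
    """tasks: list of (type, weight). Returns processor states and throughput."""
    procs = [{"type": None, "load": 0.0} for _ in range(n_procs)]
    # heaviest tasks first, a common list-scheduling rule
    for ttype, w in sorted(tasks, key=lambda t: -t[1]):
        # candidates: processors of the same type, or still unspecialized
        cands = [p for p in procs if p["type"] in (None, ttype)]
        best = min(cands, key=lambda p: p["load"])
        best["type"] = ttype
        best["load"] += w
    period = max(p["load"] for p in procs)
    return procs, 1.0 / period

tasks = [("A", 3.0), ("A", 1.0), ("B", 2.0)]
procs, throughput = greedy_map(tasks, 2)
print(throughput)   # 0.25: both A-tasks share one processor (load 4)
```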

3.
Modeling and Analysis of Workflows Using Petri Nets
A workflow system, in its general form, is basically a heterogeneous and distributed information system whose tasks are performed using autonomous systems. Resources such as databases and labor are typically required to process these tasks, and prerequisite to the execution of a task is a set of constraints that reflect the applicable business rules and user requirements. In this paper we present a Petri Net (PN) based framework that (1) facilitates the specification of workflow applications, (2) serves as a powerful tool for modeling the system under study at a conceptual level, (3) allows a smooth transition from the conceptual level to a testbed implementation and (4) enables the analysis, simulation and validation of the system under study before proceeding to implementation. Specifically, we consider three categories of task dependencies: control flow, value and external (temporal). We identify several structural properties of PNs and demonstrate their use for the following types of analysis: (1) identifying inconsistent dependency specifications among tasks; (2) testing for workflow safety, i.e., whether the workflow terminates in an acceptable state; (3) testing, for a given starting time, whether it is feasible to execute a workflow under the specified temporal constraints. We also provide an implementation for conducting these analyses.
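The "does the workflow terminate in an acceptable state" check above can be sketched by exhaustive reachability over a safe (1-bounded) Petri net. The set-based marking encoding is an illustrative simplification of general PN semantics.

```python
# Minimal Petri-net reachability sketch for the workflow-safety check:
# does every dead (no transition enabled) reachable marking equal the
# final marking? Assumes a safe net, so a marking is a set of places.
from collections import deque

def reachable_markings(transitions, initial):
    seen, queue = {initial}, deque([initial])
    while queue:
        m = queue.popleft()
        for pre, post in transitions:
            if pre <= m:                        # transition enabled
                m2 = frozenset((m - pre) | post)
                if m2 not in seen:
                    seen.add(m2)
                    queue.append(m2)
    return seen

def terminates_properly(transitions, initial, final):
    for m in reachable_markings(transitions, initial):
        enabled = any(pre <= m for pre, post in transitions)
        if not enabled and m != final:
            return False
    return True

# A -> (B | C) -> D, one place per arc
t = [
    (frozenset({"i"}), frozenset({"p1"})),      # task A
    (frozenset({"p1"}), frozenset({"p2"})),     # task B (choice 1)
    (frozenset({"p1"}), frozenset({"p3"})),     # task C (choice 2)
    (frozenset({"p2"}), frozenset({"o"})),      # task D after B
    (frozenset({"p3"}), frozenset({"o"})),      # task D after C
]
print(terminates_properly(t, frozenset({"i"}), frozenset({"o"})))  # True
```

Dropping the last transition leaves the C-branch stuck in place p3, and the check correctly reports failure.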

4.
An On-Demand Workflow Task Allocation Method for Cloud Manufacturing
Complex, large-scale cloud manufacturing applications are completed collaboratively by multiple manufacturing clouds that are mutually independent yet able to cooperate, so tasks in the same workflow may be executed by multiple distributed workflow engines. To address this, an "on-demand" workflow task allocation method is proposed that integrates task allocation into the instantiation of the business process model. Four main factors affecting task allocation in a cloud manufacturing environment are analyzed in detail, and a method for matching each factor against task requirements is given to raise the degree of automation of the allocation process. Simulation results demonstrate the effectiveness of the method.

5.
Many working processes are complex and composed of heterogeneous atomic tasks: editing; assembling data from different sources (such as databases or laboratory devices) with texts, images or learning objects; or submitting them to software components to retrieve information, render them, re-format them, or run computations and other kinds of information processing. These processes rely heavily on procedural knowledge that is tacit, owned by experts of the working activity; they are complex and extremely difficult to model and automate without a flexible, multi-module evolutionary system in place. Support for information from different modalities increases the performance of a computer system originally designed for a task of a unimodal nature. In this paper, we discuss the idea of a task management system (TMS), a component-based system that offers a virtual workbench to search, acquire, describe and assemble computational agents performing single autonomous tasks into working processes. We argue that the TMS is a cutting-edge platform for developing software solutions to problems of workflow automation and design. The architecture we propose follows the conceptual track of the TMS, allowing the composition and arrangement of atomic modules into a complex system. A workflow configuration can be implemented and extended with a set of task components: chunks of activity that are the basic elements of the workflow. Interacting with the TMS in editing mode, the workflow designer, the main actor of the system, works in an environment resembling an artisan's workshop: she or he selects the relevant chunks from system repositories, drags them into a working area and assembles them into a working TMS instance, which represents the working process.
The global interaction modality of the TMS instance is moulded and specialized according to the specific modalities of the task components retrieved from the system repositories and negotiated each time. Complex activities can thus be formally described, implemented and applied, with a consequent advantage for reorganizing personnel toward more conceptual activities.

6.
Automation of the execution of computational tasks is at the heart of improving scientific productivity. Over recent years, scientific workflows have been established as an important abstraction that captures the data processing and computation of large and complex scientific applications. By allowing scientists to model and express entire data processing steps and their dependencies, workflow management systems relieve scientists from the details of an application and manage its execution on a computational infrastructure. As the resource requirements of today's computational and data science applications that process vast amounts of data keep increasing, there is a compelling case for a new generation of advances in high-performance computing, commonly termed extreme-scale computing, which will bring forth multiple challenges for the design of workflow applications and management systems. This paper presents a novel characterization of workflow management systems using features commonly associated with extreme-scale computing applications. We classify 15 popular workflow management systems in terms of workflow execution models, heterogeneous computing environments, and data access methods. The paper also surveys workflow applications and identifies gaps for future research on the road to extreme-scale workflows and management systems.

7.
A Task-Role-Based Access Control Model
This paper introduces an access control mechanism called task-role-based access control. It models a workflow from the perspective of its tasks and manages permissions dynamically according to the task and its state. The relationship between tasks and task instances is described, roles and permissions are linked with tasks as the intermediary, and the model is formally described and analyzed.
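The central idea, permissions gated by task state, can be sketched as follows. The class, state names and policy layout are illustrative assumptions, not the paper's formal model.

```python
# Sketch of task-role-based access control: permissions are attached to
# (role, task) pairs and usable only while the task instance is active.
# State names and the policy structure are illustrative assumptions.

ACTIVE_STATES = {"running"}

class TaskInstance:
    def __init__(self, task, state="created"):
        self.task, self.state = task, state

def permissions(policy, role, instance):
    """Permissions a role holds on a task instance, gated by task state."""
    if instance.state not in ACTIVE_STATES:
        return set()                       # dynamic revocation outside active states
    return policy.get((role, instance.task), set())

policy = {("clerk", "review_order"): {"read", "annotate"}}
inst = TaskInstance("review_order")
print(permissions(policy, "clerk", inst))    # empty: task not started yet
inst.state = "running"
print(permissions(policy, "clerk", inst))    # both permissions while running
```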

8.
Design of a Hierarchical Grid Task Scheduler Based on the OGSA Grid
Based on the requirements and characteristics of grid task scheduling and a thorough analysis of the general grid scheduling process, and taking into account features of grid computing environments such as virtualization, hierarchy and autonomy, as well as the resource dependencies, coarse granularity and repeated execution of grid tasks under workflow coordination requirements, this paper designs an improved master-slave hierarchical scheduling model for grid workflow tasks and presents the corresponding scheduling strategy and algorithm implementation. The scheduler model has been applied with good results in a real grid workflow task coordination system.

9.
Individuals participating in technologically mediated forms of organization often have difficulty recognizing when groups emerge, and how the groups they take part in evolve. This paper contributes an analytical framework that improves awareness of these virtual group dynamics through analysis of electronic trace data from tasks and interactions carried out by individuals in systems not explicitly designed for context adaptivity, user modeling or user personalization. We discuss two distinct cases to which we have applied our analytical framework. These two cases provide a useful contrast of two prevalent ways for analyzing social relations starting from electronic trace data: either artifact-mediated or direct person-to-person interactions. Our case study integrates electronic trace data analysis with analysis of other, triangulating data specific to each application. We show how our techniques fit in a general model of group informatics, which can serve to construct group context, and be leveraged by future tool development aimed at augmenting context adaptivity with group context and a social dimension. We describe our methods, data management strategies and technical architecture to support the analysis of individual user task context, increased awareness of group membership, and an integrated view of social, information and coordination contexts.

10.
Uncertainty quantification for climate system models involves complex workflow execution. This paper analyzes the workflows of two stages, the quantification analysis process and model post-processing tasks, and designs and implements a workflow execution platform for this analysis. The platform automates the typical workflows of climate system simulation experiments and analysis and supports flexible workflow configuration through expert knowledge. It includes fault-tolerance mechanisms at the workflow level and accelerates computation-intensive tasks in a workflow through distributed concurrent execution. The platform also supports user-defined plugins, improving the modularity and standardization of the workflow system. Using the platform, an analysis of physical-process parameter uncertainty was carried out for the GAMIL2 climate system model, verifying the platform's feasibility and effectiveness.

11.
This paper summarizes the characteristics and problems of modern product development processes, analyzes the tasks in the development process and the relations between them, proposes process control rules for product development, discusses routing rules and task constraints, and gives an algorithm for checking the completeness of a workflow model. Flexible, level-by-level decomposition of development tasks reduces the complexity of managing the development process; a task-flow control model and algorithm based on task decomposition are given, and an example of a flexible development process control model is presented.

12.
Quality of service (QoS) of workflows and workflow-based applications is receiving increasing attention from both industry and academia. In this paper, we propose a novel analytical framework to analyze the QoS (make-span, cost, and reliability) of workflow systems based on GWF-nets, which extend traditional workflow nets by associating tasks with generally distributed firing delays and times-to-failure. The GWF-net model captures the process structure and task organization of workflows at the process level. In contrast with prevailing QoS models based on Markovian processes, we introduce a reduction technique to evaluate the QoS of a GWF-net process that avoids the state-explosion problem and the tedious mathematical derivation of state-transition probabilities. Through a case study, we show that our framework can model real-world workflow-based applications effectively. Experiments and confidence-interval analysis in the case study also indicate that the reduction methods agree with measured results. We further compare our approach with related research. Copyright © 2008 John Wiley & Sons, Ltd.
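The reduction idea can be illustrated by collapsing sequence and AND-parallel blocks into a single equivalent task that aggregates expected make-span, cost and reliability. The aggregation rules below are the standard ones for independent tasks and deterministic durations, an illustrative simplification of the paper's generally distributed model.

```python
# Sketch: structural reduction of (time, cost, reliability) triples.
# Deterministic durations are an assumption; the paper handles general
# distributions.

def reduce_sequence(tasks):
    """tasks run one after another: times and costs add, reliabilities multiply."""
    time = sum(t for t, c, r in tasks)
    cost = sum(c for t, c, r in tasks)
    rel = 1.0
    for t, c, r in tasks:
        rel *= r
    return (time, cost, rel)

def reduce_parallel(tasks):
    """AND-split/AND-join block: all branches run concurrently."""
    time = max(t for t, c, r in tasks)      # make-span is the slowest branch
    cost = sum(c for t, c, r in tasks)
    rel = 1.0
    for t, c, r in tasks:
        rel *= r                            # all branches must succeed
    return (time, cost, rel)

branch = reduce_sequence([(2.0, 5.0, 0.99), (3.0, 1.0, 0.98)])
block = reduce_parallel([branch, (4.0, 2.0, 0.95)])
print(block)   # reduced block: make-span 5.0, cost 8.0, reliability < 1
```

Applying such rules repeatedly shrinks a well-structured net to a single triple without enumerating the state space.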

13.
As large-scale distributed systems gain momentum, the scheduling of workflow applications with multiple requirements in such computing platforms has become a crucial area of research. In this paper, we investigate the workflow scheduling problem in large-scale distributed systems, from the Quality of Service (QoS) and data locality perspectives. We present a scheduling approach, considering two models of synchronization for the tasks in a workflow application: (a) communication through the network and (b) communication through temporary files. Specifically, we investigate via simulation the performance of a heterogeneous distributed system, where multiple soft real-time workflow applications arrive dynamically. The applications are scheduled under various tardiness bounds, taking into account the communication cost in the first case study and the I/O cost and data locality in the second. The simulation results provide useful insights into the impact of tardiness bound and data locality on the system performance.

14.
Typical patterns of using scientific workflows include their periodical execution on a fixed set of computational resources. Using the statistics from multiple runs, one can accurately estimate task execution and communication times and apply static scheduling algorithms. Several workflows with known estimates can be combined into a set to improve the resulting schedule. In this paper, we consider the mapping of multiple workflows onto partially available heterogeneous resources. The problem is how to fill free time windows with tasks from different workflows, taking into account users' requirements on the urgency of the results of calculations. To estimate the quality of schedules for several workflows with various soft deadlines, we introduce a unified metric incorporating levels of meeting constraints and fairness of resource distribution. The main goal of the work was to develop a set of algorithms implementing different scheduling strategies for multiple workflows with soft deadlines in a non-dedicated environment, and to perform a comparative analysis of these strategies. We study how time restrictions (given by resource providers and users) influence the quality of schedules, and which scheme of grouping and ordering the tasks is the most effective for the batched scheduling of non-urgent workflows. Experiments with several types of synthetic and domain-specific sets of multiple workflows show that: (i) the use of information about time windows and deadlines leads to a significant increase in the quality of static schedules, and (ii) the clustering-based scheduling scheme outperforms task-based and workflow-based schemes. This was confirmed by an evaluation of the studied algorithms on the basis of the CLAVIRE workflow management platform.

15.
The workflow scheduling problem has drawn a lot of attention in the research community. This paper presents a workflow scheduling algorithm, called granularity score scheduling (GSS), which is based on the granularity of the tasks in a given workflow. The main objectives of GSS are to minimize the makespan and maximize the average virtual machine utilization. The algorithm consists of three phases, namely b-level calculation, score adjustment, and task ranking and scheduling. We simulate the proposed algorithm using various benchmark scientific workflow applications, i.e., Cybershake, Epigenomic, Inspiral and Montage. The simulation results are compared with two well-known existing workflow scheduling algorithms, namely heterogeneous earliest finish time and performance effective task scheduling, which are also applied in the cloud computing environment. Based on the simulation results, the proposed algorithm demonstrates remarkable performance in terms of makespan and average virtual machine utilization.
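The b-level calculation named as the first phase is a standard DAG ranking, also used by HEFT-style schedulers: a task's b-level is its run time plus the longest downward path to an exit task. The sketch below uses illustrative task weights and omits communication costs for brevity; the score-adjustment phase of GSS is not reproduced.

```python
# Sketch: b-level (bottom-level) ranking of a task DAG, as used in the
# first phase of GSS and in HEFT-style list schedulers.

def b_levels(weights, succs):
    memo = {}
    def bl(task):
        if task not in memo:
            memo[task] = weights[task] + max(
                (bl(s) for s in succs.get(task, [])), default=0)
        return memo[task]
    for t in weights:
        bl(t)
    return memo

weights = {"t1": 2, "t2": 3, "t3": 1, "t4": 4}
succs = {"t1": ["t2", "t3"], "t2": ["t4"], "t3": ["t4"]}
ranks = b_levels(weights, succs)
# schedule tasks in decreasing b-level order
order = sorted(weights, key=lambda t: -ranks[t])
print(order)   # ['t1', 't2', 't3', 't4']
```

Decreasing b-level order respects precedence constraints, since a task's b-level always exceeds that of its successors.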

16.
The jABC is a framework for process modelling and execution according to the XMDD (eXtreme model-driven design) paradigm, which advocates the rigorous use of user-level models in the software development process and software life cycle. We have used the jABC in the domain of scientific workflows for more than a decade now—an occasion to look back and take stock of our experiences in the field. On the one hand, we discuss results from the analysis of a sample of nearly 100 scientific workflow applications that have been implemented with the jABC. On the other hand, we reflect on our experiences and observations regarding the workflow development process with the framework. We then derive and discuss ongoing further developments and future perspectives for the framework, all with an emphasis on simplicity for end users through increased domain specificity. Concretely, we describe how the use of the PROPHETS synthesis plugin can enable a semantics-based simplification of the workflow design process, how with the jABC4 and DyWA frameworks more attention is paid to the ease of data management, and how the Cinco SCCE Meta-Tooling Suite can be used to generate tailored workflow management tools.

17.
Failure Handling and Failure Recovery for Object-Model-Based Workflows
高军, 王海洋. Journal of Software (软件学报), 2001, 12(5): 776-782
Failure handling and failure recovery are important components of a workflow management system. This paper proposes a new object-based workflow model and designs failure-handling and failure-recovery strategies under it. Compared with traditional approaches, the new strategy takes into account the control and data dependencies between work steps and discusses the concrete implementation of work-step applications, in order to improve the efficiency of workflow failure handling and recovery.

18.
In practice, a new task is usually related to existing tasks. Transfer learning methods can transfer useful information from existing data sets to the new task, reducing the large amount of time and expense consumed in rebuilding models. However, because of distribution differences between tasks, the problem of avoiding negative transfer in heterogeneous environments has not yet been solved effectively. Besides measuring the similarity between data sets, the relevance between instances also needs to be measured, whereas most traditional methods operate at only one of these levels. This paper proposes a transfer learning method based on compression coding (TLCC), with algorithmic models at two levels. Specifically, at the data level, the similarity between data sets can be expressed as the coding length of a hyperplane classifier; at the instance level, valuable instances are further selected for transfer, improving performance and avoiding negative transfer. Experimental results show that the proposed algorithm has clear advantages over other algorithms and maintains high accuracy in noisy environments.

19.
To implement task scheduling and time management in workflow management systems, avoid overflow when a process runs multiple tasks, and improve process efficiency, a colored timed Petri net with non-fixed delays is defined. Overflow is avoided by controlling the minimum time distance between tasks, and a task monitor implements the corresponding control strategy. Taking the minimum time interval between tasks as the optimization objective, timing analysis and task scheduling are performed for the four basic colored timed workflow nets (sequential, parallel, conditional-choice and iterative), and a mathematical model for multi-task scheduling on basic colored timed workflow nets and a formula for the overall running time of a colored timed workflow net are derived. Finally, the task scheduling method is validated with an approval process.

20.
Workflow management is concerned with automated support for business processes. Workflow management systems are driven by process models specifying the tasks that need to be executed, the order in which they can be executed, which resources are authorised to perform which tasks, and the data that is required for, and produced by, these tasks. As workflow instances may run over a sustained period of time, it is important that workflow specifications be checked before they are deployed. Workflow verification is usually concerned with control-flow dependencies only; however, transition conditions based on data may further restrict the possible choices between tasks. In this paper we extend workflow nets with concrete conditions associated with transitions, called WTC-nets. We then demonstrate that we can determine which execution paths of a WTC-net that are possible according to the control-flow dependencies are actually possible when the conditions based on data are considered. Thus, we are able to determine more accurately at design time whether a workflow net with transition conditions is sound.
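The path-pruning idea can be sketched as follows: of all execution paths allowed by the control flow, keep only those whose data conditions can be satisfied by some data valuation. Modeling conditions as predicates over a finite data domain is an illustrative simplification of the paper's WTC-net formalism.

```python
# Sketch: pruning control-flow paths by transition conditions on data.
# Conditions are predicates over a data state; the finite data domain
# is an illustrative assumption.

def feasible_paths(paths, conditions, data_domain):
    """A path is feasible if some data valuation satisfies every
    condition along it simultaneously."""
    out = []
    for path in paths:
        preds = [conditions[t] for t in path if t in conditions]
        if any(all(p(d) for p in preds) for d in data_domain):
            out.append(path)
    return out

# control flow allows both branches; the data allows only one
paths = [["receive", "approve"], ["receive", "reject"]]
conditions = {
    "approve": lambda d: d["amount"] <= 1000,
    "reject": lambda d: d["amount"] > 1000,
}
data_domain = [{"amount": 500}]          # the only value this case sees
print(feasible_paths(paths, conditions, data_domain))
# [['receive', 'approve']]
```

A path that is reachable by control flow but infeasible for every data valuation, like the `reject` branch here, would be flagged as dead at design time.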

