Similar Documents (20 results)
1.
At present, workflow management systems have not dealt sufficiently with the issues of time, involving time modelling at build-time and time management at run-time. They lack the ability to support the checking of temporal constraints at run-time. Although some approaches have been devised to tackle this problem, they are limited to a single workflow and use only static techniques to verify temporal constraints. In reality, multiple workflows execute concurrently in a workflow management system. There may well exist resource constraints between these concurrent workflows, which significantly affect the verification of temporal constraints at run-time. This paper proposes a novel approach for dynamic verification of temporal constraints for concurrent workflows. We first investigate resource constraints in workflow management systems, and then define concurrent workflow executions. Based on these definitions, we propose a verification method that analyses the temporal relationships and resource constraints between activities across concurrent workflows.

2.
The dynamic nature of events, in particular business processes, is a natural and accepted feature of today's business environment. Therefore, workflow systems, if they are to successfully model portions of the real world, need to acknowledge the temporal aspect of business processes. This is particularly true for processes where any deviation from the prescribed model is very expensive, dangerous or even illegal. Such processes include legal processes, airline maintenance and hazardous material handling. However, time modeling in workflows is still an open research problem. This paper proposes a framework for time modeling in production workflows. Relevant temporal constraints are presented, and rules for their verification are defined. Furthermore, to enable visualization of some temporal constraints, the concept of a "duration space" is introduced. The duration algorithm, which calculates the shortest and longest workflow instance durations, is presented; it is a generalization of two categories of algorithms: the shortest-path partitioning algorithm and the Critical Path Method (CPM). Based on the duration algorithm, a verification algorithm is designed to check the consistency of the introduced temporal constraints.
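Since the abstract above presents the duration algorithm as a generalization of shortest-path algorithms and CPM, a minimal CPM-style sketch may help make the idea concrete. It computes the longest (critical-path) completion time of an activity-on-node DAG with fixed durations; the data structures and the tiny example workflow are illustrative assumptions, not the paper's duration-space construction.

```python
from collections import defaultdict, deque

def longest_workflow_duration(durations, edges):
    """CPM-style longest-path computation over an activity-on-node DAG.

    durations: {activity: duration}
    edges:     iterable of (predecessor, successor) pairs
    Returns (critical_path_length, earliest_finish_times).
    """
    preds = defaultdict(list)
    succs = defaultdict(list)
    indegree = {a: 0 for a in durations}
    for u, v in edges:
        succs[u].append(v)
        preds[v].append(u)
        indegree[v] += 1

    earliest_finish = {}
    ready = deque(a for a in durations if indegree[a] == 0)
    while ready:
        a = ready.popleft()
        start = max((earliest_finish[p] for p in preds[a]), default=0)
        earliest_finish[a] = start + durations[a]
        for s in succs[a]:
            indegree[s] -= 1
            if indegree[s] == 0:
                ready.append(s)

    return max(earliest_finish.values()), earliest_finish


# Example: a small hypothetical workflow A -> {B, C} -> D
durations = {"A": 2, "B": 4, "C": 3, "D": 1}
edges = [("A", "B"), ("A", "C"), ("B", "D"), ("C", "D")]
print(longest_workflow_duration(durations, edges)[0])  # 7
```

The same traversal with `min` instead of `max` over predecessors would give a shortest-instance bound, which is roughly how the two algorithm families the paper generalizes relate to each other.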

3.
Flexible workflows have great advantages in coping with the dynamic uncertainties that arise during business modelling and in improving the flexibility of workflow systems; however, the dynamic refinement of flexible activities has long been a difficult point in modelling and applying flexible workflows. This paper therefore proposes a dynamic refinement method for flexible activities based on a knowledge tree and constraints. The method uses the containment and generalization relations of the knowledge tree as heuristic information, and uses activity-selection constraints and temporal constraints for guidance and validation, to achieve the dynamic refinement of flexible activities. After introducing the knowledge tree and its implication relations, together with the rules for activity-selection and temporal constraints, the paper presents the dynamic refinement algorithm for flexible activities and describes the algorithms for activity-selection checking and temporal-constraint checking. Finally, an implementation of the algorithm and a case study are given; the results demonstrate the effectiveness of the proposed method and show that it solves the refinement problem for flexible activities well.

4.
Workflow adaptation is an important task in workflow reuse. In current semantic-workflow adaptation based on streams, the reusable fragments of workflows, a retrieved workflow cannot be adapted when the stream repository contains no stream structurally similar to a stream in the retrieved semantic workflow. To address this, an improved approach is proposed: adapting semantic workflows based on the behavioural features of streams. A stream's behavioural features are expressed with its set of task adjacency relations. For each stream in the retrieved semantic workflow that is inconsistent with the change request, the stream repository is filtered with an anchor-set data index and stream matching rules to obtain a candidate set of matching streams; the candidates are then validated against the stream's behavioural similarity and the change request to obtain the matching stream that is most consistent with the change request and sufficiently similar; the change request is then updated, and each retrieved matching stream replaces the original stream so that the defects in the retrieved semantic workflow are repaired step by step, finally yielding the adapted semantic workflow. Experimental results show that, compared with existing stream-based adaptation algorithms, the proposed algorithm produces a set of adapted semantic workflows of better overall quality and with better adaptability. The algorithm can provide business process managers with good-quality adapted semantic workflows for reference when modelling workflows for new business requirements, and is of considerable help in improving the efficiency and quality of workflow reuse.
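The abstract characterizes each stream's behaviour by its set of task adjacency (directly-follows) relations and matches streams by behavioural similarity. Below is a hedged sketch of that idea, assuming a stream can be reduced to a set of ordered task pairs and using a Jaccard-style score with an illustrative threshold; neither the anchor-set index nor the paper's matching rules are reproduced.

```python
def adjacency_relations(stream_edges):
    """Behavioural footprint of a workflow stream: the set of
    directly-follows task pairs (a, b), meaning 'a is immediately
    followed by b'."""
    return set(stream_edges)

def behavioural_similarity(stream_a, stream_b):
    """Jaccard similarity of the two adjacency-relation sets."""
    ra, rb = adjacency_relations(stream_a), adjacency_relations(stream_b)
    if not ra and not rb:
        return 1.0
    return len(ra & rb) / len(ra | rb)

def best_matching_stream(query_stream, candidate_streams, threshold=0.6):
    """Return the candidate whose behaviour is most similar to the
    query stream, provided it is 'similar enough' (threshold is
    illustrative)."""
    scored = [(behavioural_similarity(query_stream, c), c)
              for c in candidate_streams]
    score, best = max(scored, key=lambda x: x[0])
    return best if score >= threshold else None


# Hypothetical streams given as lists of (task, next_task) pairs
query = [("check_order", "approve"), ("approve", "ship")]
candidates = [
    [("check_order", "approve"), ("approve", "invoice")],
    [("check_order", "approve"), ("approve", "ship"), ("ship", "notify")],
]
print(best_matching_stream(query, candidates))  # the second candidate
```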

5.
Delay Time Petri Nets (DTPN) are an important class of time-extended Petri net systems; they resolve the difficulties other time-extended Petri nets (such as Time Petri Nets) face in preserving temporal constraints. The purpose of schedulability verification is to check that the temporal constraints of a workflow model are reasonable and to simulate the temporal reachability of process instances. This paper proposes a DTPN-based method for verifying and analysing time-constrained workflows. The relevant definitions of DTPN are given, and the time conditions under which transitions can fire are described in combination with workflow control structures; the concept of DTPN trigger points is introduced, together with a verification and analysis algorithm based on it; and the properties of DTPN are briefly analysed. Research on DTPN enriches and complements the existing family of time Petri nets.

6.
A workflow task assignment algorithm based on authorization constraints
In workflow application environments such as e-government and e-commerce, tasks are mainly performed by people within an organization, so the authorization-constraint requirements of the access control system are an important issue in workflow task assignment. Many of the workflow authorization models proposed so far mainly discuss how authorization is implemented and rarely address authorization constraints or dependencies. Workflow task assignment must be combined with the access control system; this paper therefore introduces the relevant concepts of authorization constraints in workflow task assignment, discusses an algorithm for task assignment that satisfies the authorization constraints together with its complexity, presents a deployment diagram of the systems involved in practical applications and the functions of the main interfaces of the authorization-constraint verification system, and finally points out the advantages of the approach and directions for future research.
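To illustrate the kind of authorization-constrained assignment discussed above, here is a small backtracking sketch that assigns each task to an authorized user while respecting simple separation-of-duty and binding-of-duty constraints. The constraint vocabulary, data, and search strategy are illustrative assumptions rather than the paper's model or algorithm.

```python
def assign_tasks(tasks, authorized, sod_pairs=(), bod_pairs=()):
    """Backtracking assignment of tasks to users.

    tasks:      ordered list of task names
    authorized: {task: set of users allowed to perform it}
    sod_pairs:  task pairs that must go to *different* users
                (separation of duty)
    bod_pairs:  task pairs that must go to the *same* user
                (binding of duty)
    Returns {task: user}, or None if no assignment satisfies the constraints.
    """
    assignment = {}

    def consistent(task, user):
        for a, b in sod_pairs:
            other = b if task == a else a if task == b else None
            if other in assignment and assignment[other] == user:
                return False
        for a, b in bod_pairs:
            other = b if task == a else a if task == b else None
            if other in assignment and assignment[other] != user:
                return False
        return True

    def backtrack(i):
        if i == len(tasks):
            return True
        task = tasks[i]
        for user in authorized[task]:
            if consistent(task, user):
                assignment[task] = user
                if backtrack(i + 1):
                    return True
                del assignment[task]
        return False

    return assignment if backtrack(0) else None


# Hypothetical procurement workflow: the requester may not approve,
# and the approver may not pay.
print(assign_tasks(
    tasks=["request", "approve", "pay"],
    authorized={"request": {"alice", "bob"},
                "approve": {"alice", "carol"},
                "pay": {"carol"}},
    sod_pairs=[("request", "approve"), ("approve", "pay")],
))
```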

7.
Verification at the workflow modelling stage is important for the successful execution of workflows. This paper first analyses the resource and time characteristics closely related to workflow execution. Since several workflows always execute concurrently in a workflow application system, a general constraint framework for workflow application systems is proposed that jointly considers the three dimensions of structure, time, and resource limits. Based on this framework, a multi-workflow net prototype is proposed, the concepts of resource conflict and schedulability in multi-workflow nets are defined, and finally a schedulability verification algorithm for multi-workflow nets and its conflict resolution scheme are given.

8.
《Information Systems》2005,30(5):349-378
Workflow systems have traditionally focused on so-called production processes, which are characterized by pre-definition, high volume, and repetitiveness. Recently, the deployment of workflow systems in non-traditional domains such as collaborative applications, e-learning and cross-organizational process integration has put forth new requirements for flexible and dynamic specification. However, this flexibility cannot be offered at the expense of control, a critical requirement of business processes. In this paper, we will present a foundation set of constraints for flexible workflow specification. These constraints are intended to provide an appropriate balance between flexibility and control. The constraint specification framework is based on the concept of "pockets of flexibility", which allows ad hoc changes and/or building of workflows for highly flexible processes. Basically, our approach is to provide the ability to execute on the basis of a partially specified model, where the full specification of the model is made at runtime and may be unique to each instance. The verification of dynamically built models is essential. Whereas ensuring that the model conforms to the specified constraints does not pose great difficulty, ensuring that the constraint set itself does not carry conflicts and redundancy is an interesting and challenging problem. In this paper, we will provide a discussion on both the static and dynamic verification aspects. We will also briefly present Chameleon, a prototype workflow engine that implements these concepts.

9.
Computational workflows are a powerful paradigm to represent and manage complex applications, particularly in large-scale distributed scientific data analysis. Workflows represent application components that result in individual computations as well as their interdependencies in terms of dataflow. Workflow systems use these representations to manage various aspects of workflow creation and execution for users, such as the automatic assignment of execution resources. This article describes an approach to automating a new aspect of the process: the selection of application components and data sources. We present a novel approach that enables users to specify varying degrees of detail and amounts of constraints in a workflow request, including the specification of constraints on input, intermediate or output data in the workflow, abstract workflow component classes rather than specific component implementations, and generic reusable workflow templates that express a pre-defined combination of components. The algorithm elaborates the user request into a set of fully ground workflows with specific choices of data sources and codes to be used, so that they can be submitted for mapping and execution. The algorithm searches through the space of possible candidate workflows by creating increasingly more specialized versions of the original template and eliminating candidates that violate constraints accumulated in the candidate workflow as components and data sources are selected. A novel feature of our approach is that it assumes a distributed architecture where data and component catalogues are separate from the workflow system. The algorithm explicitly poses queries to external catalogues, and therefore any reasoning regarding data or component properties is not assumed to occur within the workflow system. We describe our implementation of this approach in the Wings workflow system. This implementation uses the W3C Web Ontology Language and associated reasoners to implement the workflow system as well as the data and component catalogues. This research demonstrates the use of artificial intelligence techniques to support the kinds of automation envisioned by the scientific community for large-scale distributed scientific data analysis.

10.
Improving the execution efficiency and reducing the execution cost of scientific workflows in cloud environments has attracted wide attention. There is often a conflict between the local QoS constraints users expect and the overall execution efficiency of a workflow. To address this, and building on earlier work, a scientific-workflow scheduling strategy is proposed that allows local time constraints to be violated. By applying backward-priority merging to the already clustered sets of workflow tasks, idle time slices between tasks can be put to good use, thereby optimizing the workflow's execution time; in addition, to make full use of task slack and improve the overall execution efficiency of the workflow, some tasks are allowed to be scheduled in violation of their local latest-finish-time constraints. Experimental results show that the strategy brings forward the earliest completion time of the scientific workflow, improves processor utilization, and ultimately reduces the workflow's execution cost.
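As a rough illustration of the idle-time-slice idea (not the paper's merging algorithm), the sketch below tries to place a task into an idle gap on a machine's timeline, optionally letting the task exceed its local latest-finish constraint by a bounded tolerance; the data layout and the tolerance parameter are assumptions.

```python
def place_in_idle_gap(busy_intervals, duration, latest_finish, tolerance=0.0):
    """Try to fit a task of the given duration into an idle gap of a
    machine timeline.

    busy_intervals: sorted list of (start, end) intervals already occupied
    duration:       task execution time on this machine
    latest_finish:  the task's local latest-finish constraint
    tolerance:      how far past latest_finish we are willing to go
                    (the paper allows bounded local violations)
    Returns the chosen start time, or None if no gap works.
    """
    # Candidate gaps: before the first busy interval, between intervals,
    # and after the last one.
    points = [0.0] + [end for _, end in busy_intervals]
    limits = [start for start, _ in busy_intervals] + [float("inf")]
    for gap_start, gap_end in zip(points, limits):
        finish = gap_start + duration
        if finish <= gap_end and finish <= latest_finish + tolerance:
            return gap_start
    return None


# Machine busy during [0, 3] and [5, 9]; a 2-unit task with latest finish 4
print(place_in_idle_gap([(0, 3), (5, 9)], duration=2, latest_finish=4))
# -> None: the only gap that fits, [3, 5], would finish the task at 5 > 4
print(place_in_idle_gap([(0, 3), (5, 9)], duration=2, latest_finish=4,
                        tolerance=1))
# -> 3: allowed once a bounded violation of the local constraint is accepted
```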

11.
Many complex applications in scientific and engineering computing rely on scientific workflow technology. In supercomputing, scientific workflows are often modelled as parallel task graphs, and effective scheduling of parallel task graphs is important for the efficient execution of applications. This paper presents a scheduling model for parallel task graphs under resource constraints, derives several optimal-scheduling results for Fork-Join parallel task graphs, and proposes a new scheduling algorithm for general parallel task graphs. The algorithm takes into account the impact of data communication overhead on resource allocation and scheduling performance, and improves the existing CPA algorithm in particular cases. Experimental comparisons with the commonly used CPR and CPA algorithms verify that the new algorithm achieves good scheduling results. The proposed algorithm and the optimal-scheduling conclusions offer a useful reference for developing high-performance scheduling features in workflow application systems.

12.
Scheduling, in many application domains, involves optimization of multiple performance metrics. For example, application workflows with real-time constraints have strict throughput requirements and also desire a low latency or response time. In this paper, we present a novel algorithm for the scheduling of workflows that act on a stream of input data. Our algorithm focuses on two performance metrics, latency and throughput, and minimizes the latency of workflows while satisfying strict throughput requirements. We also describe steps to use the above approach to solve the problem of meeting latency requirements while maximizing throughput. We leverage pipelined, task and data parallelism in a coordinated manner to meet these objectives, and investigate the benefit of task duplication in alleviating communication overheads in the pipelined schedule for different workflow characteristics. The proposed algorithm is designed for a realistic bounded multi-port communication model, where each processor can simultaneously communicate with at most k distinct processors. Experimental evaluation using synthetic benchmarks as well as those derived from real applications shows that our algorithm consistently produces lower latency schedules that meet throughput requirements, even when previously proposed schemes fail.
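The two metrics the paper optimizes are easy to state concretely. Under the simplifying assumption that each pipeline stage's time already includes its computation and outgoing communication, throughput is bounded by the slowest stage and latency is the end-to-end time of one item. The sketch below evaluates a candidate mapping against a strict throughput requirement; it is not the paper's scheduling algorithm, and the mapping format is an assumption.

```python
def evaluate_pipeline(stages):
    """Evaluate a pipelined workflow mapping.

    stages: list of per-stage times for one data item, where each stage
            time already includes its computation and the communication
            to the next stage.
    Returns (throughput, latency): in steady state one item leaves the
    pipeline per 'bottleneck' time units, while latency is the sum of
    stage times seen by a single item.
    """
    bottleneck = max(stages)
    return 1.0 / bottleneck, sum(stages)

def meets_requirement(stages, required_throughput):
    """Check the strict throughput requirement the paper assumes."""
    throughput, _ = evaluate_pipeline(stages)
    return throughput >= required_throughput


# Hypothetical 4-stage mapping (time units per item per stage)
stages = [2.0, 3.5, 1.0, 2.5]
print(evaluate_pipeline(stages))        # (~0.286 items/unit, latency 9.0)
print(meets_requirement(stages, 0.25))  # True
```

Task duplication, as investigated in the paper, would show up here as a reduction of the communication component folded into a stage time, at the cost of redundant computation.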

13.
A grid workflow scheduling application based on a chaotic genetic algorithm
In dynamic grid environments, workflow scheduling under multiple QoS (quality of service) constraints is key to whether tasks execute successfully and how efficiently they do so. Existing grid workflow scheduling algorithms have difficulty meeting the varied requirements of practical applications; they are also insufficiently optimized and hard-pressed to offer multiple strategies. An improved chaotic genetic algorithm based on the two QoS constraints of deadline and budget is therefore proposed. First, to avoid stagnating convergence, a chaos mechanism is introduced into the genetic algorithm and the mutation probability is adapted dynamically. Second, a linear combination of time and budget is proposed to turn the objective into a fitness function. Finally, the effectiveness of the algorithm is tested on both balanced and unbalanced workflow scheduling structures.
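Two ingredients of the abstract are simple to illustrate: a logistic-map chaotic sequence used to perturb the mutation probability, and a fitness function built as a linear combination of makespan and cost normalized by the deadline and budget. The weights, map parameter, and normalization below are illustrative assumptions, not the paper's exact formulation.

```python
def logistic_map(x, r=4.0):
    """One step of the logistic map, a standard source of chaotic
    values in (0, 1) used here to perturb GA parameters."""
    return r * x * (1.0 - x)

def adaptive_mutation_prob(base_prob, chaos_value, spread=0.05):
    """Perturb the base mutation probability with the chaotic value so
    the search keeps exploring instead of stagnating."""
    return min(1.0, max(0.0, base_prob + spread * (chaos_value - 0.5)))

def fitness(makespan, cost, deadline, budget, w_time=0.5, w_cost=0.5):
    """Linear combination of time and cost, normalized by the QoS
    constraints; lower is better, and a term above 1 means the
    corresponding deadline or budget is violated."""
    return w_time * (makespan / deadline) + w_cost * (cost / budget)


# Illustrative use inside a GA loop
chaos = 0.37
for generation in range(3):
    chaos = logistic_map(chaos)
    p_mut = adaptive_mutation_prob(0.02, chaos)
    print(f"gen {generation}: mutation prob {p_mut:.3f}")

print(fitness(makespan=90, cost=480, deadline=100, budget=500))  # 0.93
```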

14.
In hierarchical organizations, hierarchical structures naturally correspond to nested sets. That is, we have a collection of sets such that, for any two sets, either one of them is a subset of the other or they are disjoint. In other words, a nested set system forms a hierarchy in the form of a tree structure. The task assignment problem in such hierarchical organizations is a real-life problem. In this paper, we introduce the tree-like weighted set packing problem, which is a weighted set packing problem restricted to sets forming a tree-like hierarchical structure. We propose a dynamic programming algorithm with cubic time complexity.
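For the special case of a strictly nested (laminar) family, where two sets conflict exactly when one contains the other, the packing admits a very simple tree dynamic program: at each node, take either the set itself or the best packing of its pairwise-disjoint children. The sketch below covers only this simplified case; the paper's tree-like setting and its cubic-time algorithm are more general.

```python
def max_weight_nested_packing(weight, children, root):
    """Maximum-weight packing of pairwise-disjoint sets drawn from a
    nested (laminar) family arranged as a tree.

    weight:   {set_id: weight}
    children: {set_id: list of child set_ids (its maximal subsets)}
    root:     id of the outermost set
    At each node we choose either the set itself or the best packing of
    its children, which are pairwise disjoint by nestedness.
    """
    best = {}

    def solve(node):
        child_total = sum(solve(c) for c in children.get(node, []))
        best[node] = max(weight[node], child_total)
        return best[node]

    solve(root)
    return best[root]


# Hypothetical hierarchy: a department (weight 5) containing two teams
# (weights 3 and 4) -- taking both teams beats taking the department.
weight = {"dept": 5, "team_a": 3, "team_b": 4}
children = {"dept": ["team_a", "team_b"]}
print(max_weight_nested_packing(weight, children, "dept"))  # 7
```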

15.
Science gateways often rely on workflow engines to execute applications on distributed infrastructures. We investigate six software architectures commonly used to integrate workflow engines into science gateways. In tight integration, the workflow engine shares software components with the science gateway. In service invocation, the engine is isolated and invoked through a specific software interface. In task encapsulation, the engine is wrapped as a computing task executed on the infrastructure. In the pool model, the engine is bundled in an agent that connects to a central pool to fetch and execute workflows. In nested workflows, the engine is integrated as a child process of another engine. In workflow conversion, the engine is integrated through workflow language conversion. We describe and evaluate these architectures with metrics for assessing integration complexity, robustness, extensibility, scalability and functionality. Tight integration and task encapsulation are the easiest to integrate and the most robust. Extensibility is equivalent in most architectures. The pool model is the most scalable, and meta-workflows are available only in nested workflows and workflow conversion. These results provide insights for science gateway architects and developers.

16.
任丰玲  于炯  杨兴耀 《计算机工程》2012,38(23):287-290
For the problem of scheduling multiple directed acyclic graph (DAG) workflows in a cloud computing environment, this paper proposes an algorithm based on minimizing data transfer time and task completion time (LTCT) to handle the scheduling of multiple DAG workflows of equal priority. For the case where the DAGs have different priorities, a hybrid multi-priority multi-DAG scheduling algorithm is given. Experimental results show that, compared with the E-Fairness algorithm, LTCT avoids extra data transfer overhead while still guaranteeing fairness across the DAGs, helping to shorten the overall execution makespan of the workflows and improve resource utilization.
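To give a flavour of the "minimize data transfer time plus completion time" criterion (the concrete LTCT rules and the fairness handling are in the paper), here is a hedged sketch of a single list-scheduling decision: a ready task goes to the machine with the smallest estimated finish time, where transfers between tasks on the same machine cost nothing. All data structures and the example numbers are assumptions.

```python
def pick_machine(task, exec_time, machines, machine_free_at,
                 placed_on, finish_time, transfer_time, preds):
    """Choose the machine minimizing the task's estimated finish time.

    exec_time[m]:        execution time of `task` on machine m
    machine_free_at[m]:  when machine m becomes available
    placed_on[p]:        machine of an already-scheduled predecessor p
    finish_time[p]:      finish time of predecessor p
    transfer_time[p]:    time to ship p's output between machines
                         (counted as zero when producer and consumer
                         share a machine)
    preds[task]:         already-scheduled predecessors of `task`
    """
    best_machine, best_finish = None, float("inf")
    for m in machines:
        data_ready = max(
            (finish_time[p] + (0 if placed_on[p] == m else transfer_time[p])
             for p in preds.get(task, [])),
            default=0.0,
        )
        start = max(machine_free_at[m], data_ready)
        finish = start + exec_time[m]
        if finish < best_finish:
            best_machine, best_finish = m, finish
    return best_machine, best_finish


# Tiny example: task "t3" depends on "t1" (on vm1) and "t2" (on vm2)
print(pick_machine(
    task="t3",
    exec_time={"vm1": 4.0, "vm2": 5.0},
    machines=["vm1", "vm2"],
    machine_free_at={"vm1": 6.0, "vm2": 3.0},
    placed_on={"t1": "vm1", "t2": "vm2"},
    finish_time={"t1": 6.0, "t2": 3.0},
    transfer_time={"t1": 2.0, "t2": 2.0},
    preds={"t3": ["t1", "t2"]},
))  # ('vm1', 10.0): co-locating with t1 avoids the larger transfer
```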

17.
In this paper, a rotary chaotic particle swarm optimization (RCPSO) algorithm is presented to solve the trustworthy scheduling of grid workflows. In general, grid workflow scheduling is a complex optimization problem that requires considering various scheduling criteria so as to meet a wide range of QoS requirements from users. Traditional research into grid workflow scheduling mainly focuses on optimization constrained by time and cost; key requirements for reliability, availability and security are not considered adequately. The main contribution of this study is to propose a new approach for trustworthy workflow scheduling in a large-scale grid with rich service resources, and to present the RCPSO algorithm to optimize scheduling performance in a multi-dimensional complex space. Experiments were done on two grid applications with at most 120 candidate services supplied to each task of various workflows. The results show that RCPSO performs better than GA, ACO and other recent variants of PSO in solving trustworthy grid workflow scheduling problems.

18.
A science process is a process for solving complex scientific problems that usually have no mature solution methods. Science processes, if modeled in workflow form, i.e. as scientific workflows, can be managed more effectively and performed more automatically. However, most current workflow models seldom take account of the specific characteristics of science processes and are not very suitable for modeling scientific workflows. Therefore, a new workflow model named the problem-based scientific workflow model (PBSWM) is proposed in this paper to accommodate those characteristics. Corresponding soundness verification and dynamic modification are discussed based on the new modeling method. This paper makes three main contributions: (1) three new constructs are proposed for special logic semantics in science processes; (2) verification is performed from both the data-specific and the control-specific perspectives; and (3) a set of rules is provided to automatically infer passive modifications caused by other modifications.

19.
A growing number of data- and compute-intensive experiments have been modeled as scientific workflows in the last decade. Meanwhile, clouds have emerged as a prominent environment to execute this type of workflow. In this scenario, the investigation of workflow scheduling strategies, aiming at reducing execution times, has become a top priority and a very popular research field. However, few works consider the problem of data file assignment when solving the task scheduling problem. Usually, a workflow is represented by a graph where nodes represent tasks, and the scheduling problem consists of allocating tasks to machines to be executed at predefined times, aiming at reducing the makespan of the whole workflow. In this article, we show that the scheduling of scientific workflows can be improved when the task scheduling and the data file assignment problems are treated together. Thus, we propose a new workflow representation, where nodes of the workflow graph represent either tasks or data files, and define the Task Scheduling and Data Assignment Problem (TaSDAP) considering this new model. We formulated this problem as an integer programming problem. Moreover, a hybrid evolutionary algorithm for solving it, named HEA-TaSDAP, is also introduced. To evaluate our approach we conducted two types of experiments: theoretical and practical ones. First, we compared HEA-TaSDAP with the solutions produced by the mathematical formulation and by other works from the related literature. Then, we considered real executions on the Amazon EC2 cloud using a real scientific workflow use case (SciPhy for phylogenetic analyses). In all experiments, HEA-TaSDAP outperformed the other classical approaches from the related literature, such as Min–Min and HEFT.

20.
Bag-of-Tasks (BoT) workflows are widespread in many big data analysis fields. However, there are very few cloud resource provisioning and scheduling algorithms tailored for BoT workflows. Furthermore, existing algorithms fail to consider the stochastic task execution times of BoT workflows, which leads to deadline violations and increased resource renting costs. In this paper, we propose a dynamic cloud resource provisioning and scheduling algorithm that aims to meet the workflow deadline by using the sum of a task's expected execution time and its standard deviation to estimate real task execution times. A bag-based delay scheduling strategy and a single-type virtual machine interval renting method are presented to decrease the resource renting cost. The proposed algorithm is evaluated using the cloud simulator ElasticSim, which extends CloudSim. The results show that the dynamic algorithm decreases the resource renting cost while guaranteeing the workflow deadline, compared to existing algorithms.
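The estimation rule stated in the abstract, using the sum of a task's expected execution time and its standard deviation as a conservative planning value, is simple to sketch. The sample-based estimate below follows that rule; the greedy deadline check wrapped around it is an illustrative assumption, not the paper's provisioning algorithm.

```python
from statistics import mean, stdev

def conservative_estimate(samples):
    """Planning value for a task with stochastic execution time:
    expectation plus one standard deviation, to hedge against
    deadline violations."""
    return mean(samples) + (stdev(samples) if len(samples) > 1 else 0.0)

def bag_fits_deadline(task_samples, deadline, machines):
    """Rough check for a bag of independent tasks packed greedily onto
    `machines` identical VMs (illustrative only)."""
    estimates = sorted((conservative_estimate(s) for s in task_samples),
                       reverse=True)
    # Longest-processing-time-first packing onto the machines.
    loads = [0.0] * machines
    for e in estimates:
        loads[loads.index(min(loads))] += e
    return max(loads) <= deadline


# Three tasks with observed runtimes (minutes), two rented VMs
task_samples = [[10, 12, 11], [20, 25, 22], [8, 9, 10]]
print([round(conservative_estimate(s), 1) for s in task_samples])
print(bag_fits_deadline(task_samples, deadline=30, machines=2))  # True
```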
