Similar Articles (20 results found)
1.
Verification at the workflow modeling stage is crucial to successful workflow execution. This paper first analyzes the resource and time characteristics closely related to workflow execution. Since a workflow application system typically runs several workflows concurrently, it proposes a general constraint framework for workflow application systems that integrates three dimensions: structure, time, and resource constraints. Based on this framework, a multi-workflow net prototype is proposed, the concepts of resource conflict and schedulability under multi-workflow nets are defined, and finally a schedulability verification algorithm for multi-workflow nets and its conflict resolution scheme are presented.

2.
To address the poor flexibility and adaptability of traditional workflows, this paper proposes an agent-based workflow management model, built on the WfMS reference model, that is easy to extend and port. A case study shows that combining agents with the traditional workflow model improves the workflow's learning ability and resolves workflow resource conflicts.

3.
The dynamic nature of grid environments makes resource access control during scientific workflow execution an important research topic. This paper therefore proposes a context-aware resource access control mechanism, analyzing and defining the task context of scientific workflows and its constraints. A context-aware resource access control algorithm is described, and on this basis a context-aware scientific workflow management system framework is designed. Finally, the algorithm is validated with a weather-forecasting scientific workflow example.

4.
Current workflow systems generally lack flexibility, which leads to poor dynamic adaptability and practicality. Drawing on ontology technology, this paper discusses workflow flexibility and proposes a flexibility mechanism based on ontology substitution: when a predefined sub-workflow or resource is unavailable, a substitutable sub-workflow or resource is located. Applying this mechanism to the migrating-workflow model, a flexible migrating-workflow system framework is proposed, giving the workflow system good flexibility and adaptability in both modeling and execution while reducing complexity. The mechanism effectively enables reconfiguration and scaling of dynamic-alliance workflow systems.

5.
Research on a P2P-based Architecture for Web Workflow Management Systems   (Total citations: 3; self-citations: 1; other citations: 2)
Workflow systems built on the traditional C/S architecture typically suffer from server-side resource bottlenecks, while the few existing P2P-based workflow systems do not exploit the advantages of Web services. To address these shortcomings, this paper proposes a Web workflow management system architecture based on a P2P network. The system introduces a notification mechanism to achieve distributed workflow management; activities in a workflow are implemented by Web services, so the workflow itself becomes a service invocable over the Internet. The architecture overcomes the drawbacks of a central server and offers strong adaptability and extensibility.

6.
Rapidly developing mobile agent and grid technologies provide workflows with a new computing environment for integrating distributed resources. Traditional workflow architectures have many weaknesses, such as a lack of flexibility in dynamic environments. This paper proposes a distributed workflow architecture that integrates Web services, grid, workflow, and mobile agent technologies, in which workflows cooperate through communication among mobile agents; the architecture also enhances workflow flexibility and reliability. Based on this new model, a concrete workflow workstation is implemented, and secure use of the system is supported.

7.
For a real grid environment, the Open Science Grid (OSG), this paper proposes a multi-stage grid workflow scheduling mechanism comprising site discovery, initial site evaluation, and dynamic site evaluation and selection. The initial performance of each resource site is evaluated using time-series-based performance predictions, and a selection algorithm based on an adaptive scoring mechanism for grid resource sites is proposed. To improve the reliability of workflow execution and shorten execution time as much as possible, an incremental task replication strategy is designed, and the empirical cumulative distribution function of task queue waiting times at each resource site is used to tune the replication parameters. Extensive experiments on OSG with the Swift grid workflow system show that the proposed algorithm and strategies effectively reduce workflow schedule length and job rejection rate, while the scale of Swift workflows that can complete successfully on OSG increases markedly.
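The empirical-CDF idea in entry 7 can be sketched as follows: collect past queue waiting times at a site, build the empirical cumulative distribution, and launch a task replica once a job has waited longer than a chosen quantile of past waits. This is an illustrative sketch under assumed names and a fixed quantile, not the paper's actual parameterization.

```python
import bisect

def empirical_cdf(samples):
    """Return F where F(x) = fraction of samples <= x."""
    xs = sorted(samples)
    n = len(xs)
    def F(x):
        return bisect.bisect_right(xs, x) / n
    return F

def replica_timeout(wait_times, quantile=0.9):
    """Pick the wait time beyond which a replica is launched:
    the smallest observed wait covering `quantile` of past jobs."""
    xs = sorted(wait_times)
    idx = min(int(quantile * len(xs)), len(xs) - 1)
    return xs[idx]
```

A scheduler would then submit a duplicate copy of any job still queued after `replica_timeout(...)` seconds, trading a little extra load for a bounded tail of waiting times.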

8.
In traditional grid workflow models, distributed workflow managers do not cooperate, so resource scheduling conflicts can occur. Moreover, in existing workflow scheduling algorithms, the participating workflow managers rely on a centralized or semi-centralized hierarchical resource information service, which limits system scalability. To solve these problems, this paper proposes a distributed cooperative workflow scheduling algorithm that manages the grid's workflow managers through a two-dimensional coordination space responsible for resource discovery and coordinated scheduling. The algorithm not only avoids performance bottlenecks but also enhances the system's scalability and autonomy.

9.
Workflow management systems usually define workflow models strictly. In practice, however, running workflow instances often deviate from their predefined models for many reasons, such as insufficient information or unavailability of required resources. How to resolve path selection during workflow instance execution has therefore become a current research hotspot. Building on previous work, this paper proposes an ontology-based flexible workflow execution framework.

10.
Research and Implementation of a Workflow Model   (Total citations: 3; self-citations: 0; other citations: 3)
This paper introduces the basic concepts of workflow and workflow modeling, as well as the application of Petri net techniques to workflow models, and proposes a concrete implementation method for workflow modeling. The implementation covers information models for enterprise personnel, resources, and tasks; the task model includes the constraint relations between tasks as well as the documents produced during task execution.

11.
12.
Large-scale applications can be expressed as a set of tasks with data dependencies between them, also known as application workflows. Due to the scale and data processing requirements of these applications, they require Grid computing and storage resources. So far, the focus has been on developing easy-to-use interfaces for composing these workflows and finding an optimal mapping of tasks in the workflow to the Grid resources in order to minimize the completion time of the application. After this mapping is done, a workflow execution engine is required to run the workflow over the mapped resources. In this paper, we show that the performance of the workflow execution engine in executing the workflow can also be a critical factor in determining the workflow completion time. Using Condor as the workflow execution engine, we examine the various factors that affect the completion time of a fine-granularity astronomy workflow. We show that changing the system parameters that influence these factors and restructuring the workflow can drastically reduce the completion time of this class of workflows. We also examine the effect of the optimizations developed for the astronomy application on a coarser-granularity biology application. We were able to reduce the completion time of the Montage and the Tomography application workflows by 90% and 50%, respectively.
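One common form of the workflow restructuring entry 12 alludes to is clustering fine-grained tasks so the execution engine submits fewer, larger jobs. A minimal sketch, assuming a dict-based task graph and an arbitrary batch size (both illustrative, not the paper's actual mechanism): compute topological levels of the DAG, then merge tasks within each level into batches.

```python
from collections import defaultdict

def topological_levels(deps):
    """deps: {task: set of prerequisite tasks} -> list of levels."""
    indeg = {t: len(p) for t, p in deps.items()}
    children = defaultdict(list)
    for t, ps in deps.items():
        for p in ps:
            children[p].append(t)
    level = sorted(t for t, d in indeg.items() if d == 0)
    levels = []
    while level:
        levels.append(level)
        nxt = []
        for t in level:
            for c in children[t]:
                indeg[c] -= 1
                if indeg[c] == 0:
                    nxt.append(c)
        level = sorted(nxt)
    return levels

def cluster(levels, size):
    """Merge tasks within each level into batches of at most `size`,
    reducing the number of jobs the engine must schedule."""
    return [[lvl[i:i + size] for i in range(0, len(lvl), size)]
            for lvl in levels]
```

Clustering only within a level preserves all data dependencies, since every task's prerequisites sit in earlier levels.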

13.
The ability to support Quality of Service (QoS) constraints is an important requirement in some scientific applications. With the increasing use of Cloud computing infrastructures, where access to resources is shared, dynamic and provisioned on-demand, identifying how QoS constraints can be supported becomes an important challenge. However, access to dedicated resources is often not possible in existing Cloud deployments and limited QoS guarantees are provided by many commercial providers (often restricted to error rate and availability, rather than particular QoS metrics such as latency or access time). We propose a workflow system architecture which enforces QoS for the simultaneous execution of multiple scientific workflows over a shared infrastructure (such as a Cloud environment). Our approach involves multiple pipeline workflow instances, with each instance having its own QoS requirements. These workflows are composed of a number of stages, with each stage being mapped to one or more physical resources. A stage involves a combination of data access, computation and data transfer capability. A token bucket-based data throttling framework is embedded into the workflow system architecture. Each workflow instance stage regulates the amount of data that is injected into the shared resources, allowing for bursts of data to be injected while at the same time providing isolation of workflow streams. We demonstrate our approach by using the Montage workflow, and develop a Reference net model of the workflow.
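The token-bucket throttling that entry 13 embeds in its architecture is a standard rate limiter and can be sketched directly; class and parameter names here are illustrative, not taken from the paper.

```python
import time

class TokenBucket:
    """Token-bucket throttle: a stage may inject `amount` units of data
    only when enough tokens are available; tokens refill at `rate` per
    second up to `capacity`, so bounded bursts are allowed while the
    long-run injection rate stays capped."""

    def __init__(self, rate, capacity, now=time.monotonic):
        self.rate = rate          # tokens added per second
        self.capacity = capacity  # maximum burst size
        self.tokens = capacity    # start full
        self.now = now            # injectable clock (eases testing)
        self.last = now()

    def try_consume(self, amount):
        """Consume `amount` tokens if available; return success."""
        t = self.now()
        self.tokens = min(self.capacity,
                          self.tokens + (t - self.last) * self.rate)
        self.last = t
        if amount <= self.tokens:
            self.tokens -= amount
            return True
        return False
```

Giving each workflow stage its own bucket is what isolates the streams: one workflow's burst drains only its own tokens, not its neighbors'.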

14.
王健, 崔凯, 刘美. 《计算机时代》, 2014(10): 44-45
The film translation and production process in Xinjiang currently suffers from complex workflows, miscellaneous media resources, and an inability to manage film and television materials centrally. To solve these problems, a minority-language film and television asset management system based on a workflow engine is designed. In application, the system effectively ties the media resources used in film translation and dubbing to the concrete work processes, achieves unified management of media resources, and better guarantees the preservation and transmission of valuable film and television materials.

15.
In recent years, scientific workflows have emerged as a fundamental abstraction for structuring and executing scientific experiments in computational environments. Scientific workflows are becoming increasingly complex and more demanding in terms of computational resources, thus requiring the use of parallel techniques and high performance computing (HPC) environments. Meanwhile, clouds have emerged as a new paradigm where resources are virtualized and provided on demand. By using clouds, scientists have expanded beyond single parallel computers to hundreds or even thousands of virtual machines. Although the initial focus of clouds was to provide high throughput computing, clouds are already being used to provide an HPC environment where elastic resources can be instantiated on demand during the course of a scientific workflow. However, this model also raises many open, yet important, challenges such as scheduling workflow activities. Scheduling parallel scientific workflows in the cloud is a very complex task, since we have to take into account many different criteria and to explore the elasticity characteristic for optimizing workflow execution. In this paper, we introduce an adaptive scheduling heuristic for parallel execution of scientific workflows in the cloud that is based on three criteria: total execution time (makespan), reliability and financial cost. Besides scheduling workflow activities based on a 3-objective cost model, this approach also scales resources up and down according to the restrictions imposed by scientists before workflow execution. This tuning is based on provenance data captured and queried at runtime. We conducted a thorough validation of our approach using a real bioinformatics workflow. The experiments were performed in SciCumulus, a cloud workflow engine for managing scientific workflow execution.
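The core of a 3-objective cost model like the one entry 15 describes can be sketched as a weighted score over normalized makespan, financial cost, and failure probability. The weights, field names, and candidate data below are assumptions for illustration; the paper's actual SciCumulus cost model is not reproduced here.

```python
def score(candidate, weights=(0.5, 0.3, 0.2)):
    """Weighted cost of a candidate resource; all three criteria are
    assumed pre-normalized to [0, 1], lower is better for each."""
    w_time, w_cost, w_fail = weights
    return (w_time * candidate["time"] +
            w_cost * candidate["cost"] +
            w_fail * candidate["failure_prob"])

def pick_resource(candidates, weights=(0.5, 0.3, 0.2)):
    """Return the candidate minimizing the weighted 3-criteria cost."""
    return min(candidates, key=lambda c: score(c, weights))
```

Shifting the weights expresses the scientist's restrictions: raising `w_cost` favors cheap but slow machines, raising `w_fail` favors reliable ones.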

16.
The correctness of a workflow specification is critical for the automation of business processes. Therefore, errors in the specification should be detected and corrected at build-time. In this paper, we present a conflict verification and resolution approach, based on Petri nets, for a kind of workflow constrained by resources and non-determined duration. In this kind of workflow, each activity has two timing functions giving its minimum and maximum duration, and the implementation of some activities requires resources. From the Petri net model obtained, the earliest time to start each activity can be calculated and the key activities influencing the implementation of the workflow can be determined, with which the resource consistency between activities can be verified. Key-activity and waiting-short priority strategies are adopted to remove the resource conflicts between activities, which ensures that most of the subsequent activities start as early as possible and that the whole workflow finishes in a shorter time. Experiments show that the proposed removal strategy for resource conflicts outperforms other strategies.

17.
Research on a Workflow Model Based on Colored Petri Nets   (Total citations: 1; self-citations: 0; other citations: 1)
To address the shortcomings of traditional Petri net modeling methods, this paper studies a workflow modeling method that models through the resource structure. On the basis of colored Petri nets, a resource/task net (R/T-net) is proposed, and a workflow modeling process based on the R/T-net is given. The R/T model effectively unifies the product data structure and the process structure, lets resource flows control task flows, and supports model simulation.

18.
Time and resource management and verification are two important aspects of workflow management systems. In this paper, we present a modeling and analysis approach for a kind of workflow constrained by resources and nondetermined time based on Petri nets. Different from previous modeling approaches, there are two kinds of places in our model to represent the activities and resources of a workflow, respectively. For each activity place, its input and output transitions represent the start and termination of the activity, respectively, and there are two timing functions in it to define the minimum and maximum duration times of the corresponding activity. Using the constructed Petri net model, the earliest and latest times to start each activity can be calculated. With the reachability graph of the Petri net model, the timing factors influencing the implementation of the workflow can be calculated and verified. In this paper, the sufficient conditions for the existence of the best implementation case of a workflow are proved, and the method for obtaining such an implementation case is presented. The obtained results will benefit the evaluation and verification of the implementation of a workflow constrained by resources and nondetermined time.
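The earliest-start computation entry 18 performs on its Petri net model reduces, for an acyclic task graph, to a standard critical-path forward pass: an activity's earliest start is the maximum over its predecessors of their earliest start plus duration. The dict-based graph below is an illustrative stand-in for the Petri net, not the paper's formulation.

```python
def earliest_starts(deps, duration):
    """Forward pass over an acyclic task graph.
    deps: {task: set of predecessor tasks}; duration: {task: time}.
    Returns {task: earliest start time}."""
    es = {}
    def visit(t):
        if t not in es:
            # a task may start once its slowest predecessor has finished
            es[t] = max((visit(p) + duration[p] for p in deps[t]),
                        default=0)
        return es[t]
    for t in deps:
        visit(t)
    return es
```

Latest starts follow symmetrically from a backward pass over the same graph using the maximum durations, which is what makes the min/max timing functions in the paper's model useful for verification.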

19.
20.
Grid infrastructures are currently the main supporting environment for planning, deploying, and executing scientific workflow applications. However, because grid resources are autonomous, dynamic, and heterogeneous, effectively scheduling scientific workflows while guaranteeing users' QoS constraints is a research hotspot. For the cost-constrained scientific workflow scheduling problem, and to improve execution reliability, this paper uses a stochastic service model to describe the dynamic service capacity of resource nodes, accounts for the impact of local task load on resource performance, and gives a method for evaluating resource reliability. On this basis, a reliable scheduling algorithm for scientific workflows under cost constraints, RSASW, is proposed. Simulation results show that, compared with the GAIN3, GreedyTime-CD, and PFAS algorithms, RSASW provides much better reliability guarantees for workflow execution.
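A common way to turn a stochastic service model into a reliability score, in the spirit of entry 20, is to assume node failures arrive at some rate and score each node by its probability of completing a task. The exponential failure model and all names below are assumptions for illustration, not the RSASW formulation.

```python
import math

def node_reliability(failure_rate, task_time):
    """P(task of expected length task_time completes) under an assumed
    exponential failure model with the given failure rate."""
    return math.exp(-failure_rate * task_time)

def most_reliable(nodes, task_time):
    """nodes: {name: failure rate}; pick the node with the highest
    completion probability for this task."""
    return max(nodes, key=lambda n: node_reliability(nodes[n], task_time))
```

A cost-constrained scheduler would restrict `nodes` to those within budget before selecting, which is roughly the shape of a reliability-aware, cost-bounded choice.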
