Similar documents
15 similar documents found (search time: 15 ms)
1.
Workflow scheduling on parallel systems has long been known to be an NP-complete problem. As modern grid and cloud computing platforms emerge, it becomes indispensable to schedule mixed-parallel workflows in an online manner in a speed-heterogeneous multi-cluster environment. However, most existing scheduling algorithms were not developed for online mixed-parallel workflows of rigid data-parallel tasks in multi-cluster environments, and therefore cannot handle the problem efficiently. In this paper, we propose a scheduling framework, named Mixed-Parallel Online Workflow Scheduling (MOWS), which divides the entire scheduling process into four phases: task prioritizing, waiting-queue scheduling, task rearrangement, and task allocation. Based on this framework, we developed four new methods: shortest-workflow-first, priority-based backfilling, preemptive task execution, and All-EFT task allocation, for scheduling online mixed-parallel workflows of rigid tasks in speed-heterogeneous multi-cluster environments. To evaluate the proposed scheduling methods, we conducted a series of simulation studies and compared them with previously proposed approaches in the literature. The experimental results indicate that each of the four proposed methods significantly outperforms existing approaches, and that together the methods in MOWS achieve more than 20% improvement in average turnaround time.
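Two of the ideas named above, shortest-workflow-first prioritization and EFT-style allocation on speed-heterogeneous clusters, can be sketched in a few lines. This is a minimal illustration under assumed data structures (plain dicts, a single `work` figure per task); the function names and the simplifications are mine, not the MOWS implementation.

```python
def shortest_workflow_first(workflows):
    """Order workflows by total work (sum of task lengths), shortest first."""
    return sorted(workflows, key=lambda wf: sum(t["work"] for t in wf["tasks"]))

def eft_allocate(task, clusters):
    """Pick the cluster giving the earliest finish time for `task`.
    Each cluster has a relative `speed` and a `free_at` availability time."""
    best = min(clusters,
               key=lambda c: c["free_at"] + task["work"] / c["speed"])
    finish = best["free_at"] + task["work"] / best["speed"]
    best["free_at"] = finish           # reserve the cluster until the task ends
    return best["name"], finish

workflows = [
    {"name": "wf-long",  "tasks": [{"work": 40.0}, {"work": 20.0}]},
    {"name": "wf-short", "tasks": [{"work": 10.0}]},
]
clusters = [
    {"name": "fast", "speed": 2.0, "free_at": 0.0},
    {"name": "slow", "speed": 1.0, "free_at": 0.0},
]

order = shortest_workflow_first(workflows)
print([wf["name"] for wf in order])        # shortest total work goes first
for wf in order:
    for task in wf["tasks"]:
        print(wf["name"], *eft_allocate(task, clusters))
```

Note how the last task lands on the slow cluster: once the fast cluster is busy until time 25, the slow one finishes the 20-unit task earlier, which is exactly the trade-off EFT allocation captures.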

2.
In this paper, we consider the communications involved in the execution of a complex application deployed on a heterogeneous large-scale distributed platform. Such applications intensively use collective macro-communication schemes, such as scatters, personalized all-to-alls, or gather/reduce operations. Rather than aiming at minimizing the execution time of a single macro-communication, we focus on steady-state operation. We assume that there is a large number of macro-communications to perform in pipeline fashion, and we aim at maximizing the throughput, i.e., the (rational) number of macro-communications that can be initiated every time-step. We target heterogeneous platforms, modeled by a graph in which resources have different communication and computation speeds. The situation is simpler for series of scatters or personalized all-to-alls than for series of reduce operations, because of the possibility of combining various partial reductions of the local values and of interleaving computations with communications. In all cases, we show how to determine the optimal throughput and how to exhibit a concrete periodic schedule that achieves this throughput.
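The steady-state viewpoint reduces, in the simplest star-shaped case, to a bottleneck computation: the sustainable throughput is the reciprocal of the time the busiest resource needs per macro-communication. The toy example below assumes each outgoing link of the source must carry a fixed amount of data per scatter; the link names and numbers are illustrative, not taken from the paper.

```python
from fractions import Fraction

# data units each link must carry per scatter, and link bandwidth
links = {
    "src->A": {"data": 4, "bandwidth": 2},   # 2 time-steps per scatter
    "src->B": {"data": 6, "bandwidth": 3},   # 2 time-steps per scatter
    "src->C": {"data": 5, "bandwidth": 1},   # 5 time-steps per scatter: bottleneck
}

def steady_state_throughput(links):
    """Scatters initiated per time-step = 1 / (busiest link's time per scatter)."""
    time_per_scatter = max(Fraction(l["data"], l["bandwidth"])
                           for l in links.values())
    return 1 / time_per_scatter

print(steady_state_throughput(links))   # prints 1/5
```

Using `Fraction` mirrors the abstract's point that the optimal throughput is a rational number, which a periodic schedule of period 5 time-steps can realize here.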

3.
NextGen aviation will require an even greater reliance on automation than current‐day operations. Therefore, systems with problems in human–automation interaction must be identified and resolved early, well before they are introduced into operation. This paper describes a research and software development effort to build a prototype automation design advisor tool (ADAT) for flight deck automation. This tool uses models of human performance to identify perceptual, cognitive, and action‐related inefficiencies in the design of flight management systems. Aviation designers can use the tool to evaluate and compare potential flight deck automation designs and to identify potential human–automation interaction concerns. Designers can compare different flight management systems in terms of specific features and their ability to support pilot performance. ADAT provides specific, research‐based guidance for resolving problematic design issues. It was specifically designed to be flexible enough for both current‐day technologies and revolutionary NextGen designs. © 2012 Wiley Periodicals, Inc.

4.
In this study, the influence of chair characteristics on comfort, discomfort, adjustment time, and seat interface pressure was investigated during VDU and non-VDU tasks. The two office chairs investigated, both designed according to European and Dutch standards, differed in: 1) seat cushioning and shape, 2) backrest angle, and 3) controls. Thirty subjects in total, both male and female, participated in two experiments: twenty in the first and ten in the second. Significant differences were found for ease of adjustment and adjustment time of controls, independent of the tasks. Related to tasks, a significant difference was found for the backrest range of motion: for non-VDU tasks, a larger range of backrest motion was preferred by 70% of the subjects. The chair design differences were clearest for comfort and adjustment time of controls, followed by comfort of backrest angle. No differences were found for seat-pan comfort and discomfort, first impressions, or peak interface pressure.

5.
In this paper, collaborative writing is modeled as a workflow application comprising a number of activities. Each activity is assigned to a co-author to edit certain part(s) of the shared document. The order of editing is governed by a workflow schedule, which can be modified during execution. To enable concurrent editing, a data definition scheme is proposed whereby documents are transformed into the XML format for internal storage. Effectively, a hierarchical structure is superimposed on each document, allowing it to be broken down into a number of independent parts. These parts may then be edited separately in parallel. Unlike other approaches, the system supports the use of off-the-shelf editor software. Users can thus adapt to the system easily, and existing documents can also be re-used rapidly. A prototype has been built which supports collaborative editing of Microsoft Word™ documents.
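The core idea, superimposing a hierarchical XML structure so that independent parts can be checked out and edited in parallel, can be sketched with the standard library. The element names and the `checkout`/`checkin` functions are invented for illustration; the paper's actual scheme is richer.

```python
import xml.etree.ElementTree as ET

doc = ET.fromstring(
    "<document>"
    "<section id='intro'>Original introduction.</section>"
    "<section id='body'>Original body.</section>"
    "</document>"
)

def checkout(doc, section_id):
    """Hand one independent part of the document to a co-author."""
    return next(s for s in doc.findall("section")
                if s.get("id") == section_id)

def checkin(section, new_text):
    """Merge the edited part back; sibling parts are untouched."""
    section.text = new_text

# two co-authors edit disjoint sections concurrently
checkin(checkout(doc, "intro"), "Revised introduction.")
checkin(checkout(doc, "body"), "Revised body.")

print(ET.tostring(doc, encoding="unicode"))
```

Because each co-author touches a disjoint subtree, no merge conflicts arise, which is what makes off-the-shelf editors usable on the checked-out fragments.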

6.
Optimizing disk storage to support statistical analysis operations
Data stored in spreadsheets and relational database tables can be viewed as “worksheets” consisting of rows and columns, with rows corresponding to records. Correspondingly, the typical practice is to store the data on disk in row major order. While this practice is reasonable in many cases, it is not necessarily the best practice when computation is dominated by column-based statistics. This short note discusses the performance tradeoffs between row major and column major storage of data in the context of statistical data analysis. A comparison of a software package utilizing column major storage and one using row major storage confirms our results.
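The trade-off is easy to see in miniature: a per-column statistic must stride across every record in a row-major layout, but scans one contiguous list in a column-major layout. The data below is illustrative; the point is the access pattern, which on disk translates into sequential versus scattered reads.

```python
rows = [  # row-major: one list per record
    [1.0, 10.0],
    [2.0, 20.0],
    [3.0, 30.0],
]
cols = [list(c) for c in zip(*rows)]   # column-major: one list per column

def col_mean_row_major(rows, j):
    # strides across every record to pick out field j
    return sum(r[j] for r in rows) / len(rows)

def col_mean_col_major(cols, j):
    # scans a single contiguous column
    return sum(cols[j]) / len(cols[j])

# both layouts give the same answer; only the locality differs
assert col_mean_row_major(rows, 1) == col_mean_col_major(cols, 1) == 20.0
```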

7.
The problem of determining whether a set of periodic tasks can be assigned to a set of heterogeneous processors without deadline violations has been shown, in general, to be NP-hard. This paper presents a new algorithm based on the ant colony optimization (ACO) metaheuristic for solving this problem. A local search heuristic that can be used by various metaheuristics to improve the assignment solution is proposed, and its time and space complexity are analyzed. In addition to being able to search for a feasible assignment solution, our extended ACO algorithm can optimize the solution by lowering its energy consumption. Experimental results show that both the prototype and the extended version of our ACO algorithm outperform major existing methods; furthermore, the extended version achieves an average of 15.8% energy saving over its prototype.
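A stripped-down sketch of pheromone-guided task-to-processor assignment in the spirit of ACO is below. Feasibility uses the classic EDF utilization bound, the sum of C/T per processor not exceeding 1, with execution time C scaled by processor speed. The single-ant loop, the reinforcement constant, and all numbers are simplifications of mine, not the paper's algorithm.

```python
import random

tasks = [{"C": 2.0, "T": 10.0}, {"C": 3.0, "T": 15.0}, {"C": 4.0, "T": 8.0}]
procs = [{"speed": 1.0}, {"speed": 2.0}]
pheromone = [[1.0] * len(procs) for _ in tasks]   # tau[task][proc]

def feasible(assignment):
    """EDF schedulability check: per-processor utilization must stay <= 1."""
    util = [0.0] * len(procs)
    for i, p in enumerate(assignment):
        util[p] += (tasks[i]["C"] / procs[p]["speed"]) / tasks[i]["T"]
    return all(u <= 1.0 for u in util)

def build_assignment(rng):
    """One ant: pick a processor for each task, biased by pheromone."""
    return [rng.choices(range(len(procs)), weights=pheromone[i])[0]
            for i in range(len(tasks))]

rng = random.Random(42)
best = None
for _ in range(50):                     # tiny colony: 50 single-ant iterations
    a = build_assignment(rng)
    if feasible(a):
        best = a
        for i, p in enumerate(a):       # reinforce edges of feasible solutions
            pheromone[i][p] += 0.5

print(best, feasible(best))
```

A real implementation would add the local search heuristic the abstract mentions and bias construction by an energy term; here reinforcement alone gradually concentrates ants on feasible assignments.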

8.
Efficient rescue operations require a high level of situation awareness amongst decision-makers and first responders in order to accomplish operations successfully and reduce losses. Moreover, a common operational picture between the actors involved is required to support decision-making. Different organisations and agencies therefore have to collaborate, cooperate, and coordinate their actions, and effective interaction and communication between participants is vital to fulfil these needs. However, emergency actors still lack adequate support for exchanging information effectively and ensuring a common operational picture, and thus for reaching shared situational awareness. For this reason, we aim to develop and implement Rescue MODES, a communication system designed to support situation awareness amongst French emergency actors in rescue operations. In this paper, we examine and analyse the actors' activities and interactions so that the system can be based on their real needs. We start by studying and modelling the communications, interactions, and information flow, based on an application ontology. We then define the requirements for good communication in these operations, present some existing systems in France, and discuss how each responds to these requirements.

9.
Users expect applications to successfully cope with the expansion of information necessitated by the continuous inclusion of novel types of content. Given that such content may originate from ‘not‐seen thus far’ data collections and/or data sources, the challenge is to achieve a return on investment in existing services, adapting to new information without changing the existing business‐logic implementation. To address this need, we introduce DOLAR (Data Object Language And Runtime), a service‐neutral framework which virtualizes the information space to avoid invasive, time‐consuming, and expensive source‐code extensions that frequently break applications. Specifically, DOLAR automates the introduction of new business‐logic objects in terms of the proposed virtual ‘content objects’. Such user‐specified virtual objects align to storage artifacts and help realize uniform ‘store‐to‐user’ data flows atop heterogeneous sources, while offering the reverse ‘user‐to‐store’ flows with identical effectiveness and ease of use. In addition, the suggested virtual object composition schemes help decouple business logic from any content origin, storage, and/or structural details, allowing applications to support novel types of items without modifying their service provisions. We expect that content‐rich applications will benefit from our approach, and we demonstrate how DOLAR has assisted in the cost‐effective development and gradual expansion of a production‐quality digital library. Experimentation shows that our approach imposes minimal overheads and that DOLAR‐based applications scale as well as the underlying datastore(s). Copyright © 2011 John Wiley & Sons, Ltd.

10.
Nowadays, real-time embedded applications have to cope with an increasing demand for functionality, which requires increasing processing capability. To this end, real-time systems are being implemented on top of high-performance multicore processors that run multithreaded periodic workloads by allocating threads to individual cores. In addition, to improve both performance and energy savings, the industry is introducing new multicore designs, such as ARM's big.LITTLE, that include heterogeneous cores in the same package.

A key issue in improving energy savings in multicore embedded real-time systems, and in reducing the number of deadline misses, is to accurately estimate the execution time of tasks under the supported processor frequencies. Two main aspects make this estimation difficult. First, the running threads compete with each other for shared resources. Second, almost all current microprocessors implement Dynamic Voltage and Frequency Scaling (DVFS) regulators to dynamically adjust the voltage/frequency at run time according to workload behavior. Existing execution-time estimation models rely on off-line analysis or on the assumption that task execution time scales linearly with processor frequency, which can introduce significant deviations since the memory system uses a different power supply.

In contrast, this paper proposes the Processor–Memory (Proc–Mem) model, which dynamically predicts distinct task execution times depending on the implemented processor frequencies. A power-aware EDF (Earliest Deadline First) scheduler using the Proc–Mem approach has been evaluated and compared against the same scheduler using a typical Constant Memory Access Time (CMAT) model. Results on a heterogeneous multicore processor show that the average deviation of Proc–Mem from the actual measured execution time is only 5.55%, while the average deviation of the CMAT model is 36.42%. These results translate into significant energy savings: 18% on average, and up to 31% in some mixes, compared to CMAT for a similar number of deadline misses.
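The abstract's core observation admits a small worked example: execution time has a processor part that stretches with 1/f and a memory part that does not, so a purely linear (CMAT-style) model over-estimates the cost of lowering the frequency. The split between CPU cycles and memory time below is invented for illustration, not measured data from the paper.

```python
F_MAX = 2.0e9   # Hz

def proc_mem_time(cycles_cpu, t_mem, f):
    """Proc-Mem-style estimate: CPU cycles scale with 1/f, memory time does not."""
    return cycles_cpu / f + t_mem

def linear_time(t_at_fmax, f):
    """CMAT-like assumption: the whole task scales with frequency."""
    return t_at_fmax * F_MAX / f

cycles_cpu, t_mem = 1.0e9, 0.3                     # 0.5 s CPU + 0.3 s memory at f_max
t_fmax = proc_mem_time(cycles_cpu, t_mem, F_MAX)   # 0.8 s total at f_max

f_low = 1.0e9
print(proc_mem_time(cycles_cpu, t_mem, f_low))   # 1.3 s: only the CPU part doubles
print(linear_time(t_fmax, f_low))                # 1.6 s: the linear model overshoots
```

The 0.3 s gap between the two estimates at the low frequency is exactly the kind of deviation a DVFS-aware scheduler mispredicts when it assumes linear scaling.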

11.
Role-based access control (RBAC) provides more flexible security management than the traditional approach of using user and group identifiers. In RBAC, access privileges are given to roles rather than to individual users; users acquire the corresponding permissions when playing different roles. Roles can be defined simply as labels, but such an approach does not allow users to change roles automatically under different contexts, and a static method also adds administrative overhead in role assignment. In electronic commerce (E-Commerce) and other cooperative computing environments, access to shared resources has to be controlled in the context of the entire business process; it is therefore necessary to model dynamic roles as a function of resource attributes and contextual information.

In this paper, an object-oriented organizational model, Organization Modeling and Management (OMM), is presented as an underlying model to support dynamic role definition and role resolution in E-Commerce solutions. The paper describes the OMM reference model and shows how it can be applied flexibly to capture the different classes of resources within a corporation and to maintain the complex, dynamic roles and relationships between the resource objects. Administrative tools use the role model in OMM to define security policies for role definition and role assignment. At run time, the E-Commerce application and the underlying resource manager query the OMM system to resolve roles in order to authorize any access attempt. Contrary to traditional approaches, OMM separates the organization model from the applications; it thus allows independent and flexible role modeling that realistically supports the dynamic authorization requirements of a rapidly changing business world.
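The difference between a static role label and a dynamic role resolved from resource attributes and context can be shown in a few lines. The policy rules, attribute names, and permission sets below are invented for illustration; OMM's actual organizational model is far richer.

```python
def resolve_role(user, resource, context):
    """Resolve the role a user plays for this resource at this moment."""
    if user["id"] == resource["owner"]:
        return "owner"
    if user["dept"] == resource["dept"] and context["in_business_hours"]:
        return "editor"
    return "viewer"

def authorize(user, resource, context, action):
    """Authorize by resolving the role at access time, not at assignment time."""
    permissions = {"owner":  {"read", "write", "delete"},
                   "editor": {"read", "write"},
                   "viewer": {"read"}}
    return action in permissions[resolve_role(user, resource, context)]

alice = {"id": "alice", "dept": "sales"}
order = {"owner": "bob", "dept": "sales"}

print(authorize(alice, order, {"in_business_hours": True},  "write"))   # True
print(authorize(alice, order, {"in_business_hours": False}, "write"))   # False
```

The same user gets different permissions on the same resource purely because the context changed, with no administrative re-assignment, which is the point of modeling roles as a function rather than a label.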

12.
An approach for managing the exchange of engineering product data between geographically distributed designers and analysts using a heterogeneous tool set for the through-life design of a ship is described. The approach was developed within a pan-European maritime project called VRShips-ROPAX 2000 that demonstrated how information technology could be integrated into the design process. This paper describes the development of a common model containing neutral ship product data through a bottom-up consideration of the requirements of the tools to be integrated, as well as a top-down consideration of the data requirements for through life design. This common model was supported within an Integrated Design Environment (IDE) that co-ordinated design activity distributed across Europe. The IDE ensured that the users were provided with the right data in the right form at the right time to do the right task, i.e., that the design activity was timely and appropriate. The strengths and weaknesses of the approach are highlighted.

13.
In this paper, we present an approach to global transaction management in workflow environments. The transaction mechanism is based on the well-known notion of compensation, but extended to deal with arbitrary process structures, allowing cycles in processes, and with safepoints, allowing partial compensation of processes. We present a formal specification of the transaction model and transaction management algorithms in set and graph theory, providing clear, unambiguous transaction semantics. The specification is straightforwardly mapped to a modular architecture, the implementation of which was first applied in a testing environment and then in the prototype of a commercial workflow management system. The modular nature of the resulting system allows easy distribution using middleware technology. The path from abstract semantics specification to concrete, real-world implementation of a workflow transaction mechanism is thus covered in a complete and coherent fashion. As such, this paper provides a complete framework for the application of well-founded transactional workflows. Received: 16 November 1999 / Accepted: 29 August 2001 / Published online: 6 November 2001
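Compensation with safepoints can be sketched as a log of completed steps that is undone in reverse order on failure, stopping at the most recent safepoint so that earlier work survives (partial compensation). The step names and the linear log are illustrative simplifications; the paper's model handles arbitrary process graphs.

```python
log = []   # entries: (step_name, undo_action, is_safepoint)

def run(name, undo, safepoint=False):
    """Record a completed forward step together with its compensating action."""
    log.append((name, undo, safepoint))

def compensate_to_safepoint():
    """Undo completed steps in reverse order, stopping at a safepoint."""
    undone = []
    while log and not log[-1][2]:
        name, undo, _ = log.pop()
        undo()
        undone.append(name)
    return undone

run("reserve-stock", lambda: print("release stock"), safepoint=True)
run("charge-card",   lambda: print("refund card"))
run("book-courier",  lambda: print("cancel courier"))

# shipping failed: roll back to the last safepoint, keeping the reservation
print(compensate_to_safepoint())   # ['book-courier', 'charge-card']
```

Without the safepoint check, compensation would also release the stock reservation; the safepoint is what turns full rollback into the partial compensation the abstract describes.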

14.
This paper presents the Previse system for preparing and verifying operations procedures, developed within a project conducted by Matra Marconi Space for the European Space Agency (ESA/ESTEC). This knowledge-based system supports an evolvable procedures language (syntax and vocabulary). It improves the efficiency of procedure preparation by providing powerful editing facilities, by checking the validity of the syntax used by procedure writers, and by automating the formatting task. It also provides a set of verification mechanisms for detecting different types of semantic errors in procedures.

15.
Stroke is the third most common cause of death and the sixth most common cause of disability worldwide. Treating acute ischemic stroke with thrombolytic therapy within 4.5 hours of symptom onset is effective in improving patient outcomes. The time from stroke onset to arrival at hospital has been identified as the single most important factor determining patients' eligibility for stroke thrombolysis. There is a need for simultaneous, systemic evaluation of multi-factorial interventions in pre-hospital acute care systems aimed at increasing patients' eligibility for stroke thrombolysis. In this paper, an OR solution is proposed in the form of a simulation model that provides a clear measure of the relative benefit of alternative potential interventions, demonstrating how OR modeling can be used to provide decision support in pre-hospital stroke care operations and contributing to the health OR literature.
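The kind of comparison such a model supports can be sketched as a toy Monte Carlo run: estimate the share of patients who arrive within the 4.5-hour thrombolysis window under a baseline and under an intervention that shortens the pre-hospital response. All time distributions and parameters below are invented, not drawn from the paper's model.

```python
import random

def eligible_fraction(mean_response_h, n=10_000, seed=1):
    """Fraction of simulated patients arriving inside the 4.5 h window."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(n):
        onset_to_call = rng.expovariate(1 / 2.0)         # patient delay, mean 2 h
        response = rng.expovariate(1 / mean_response_h)  # ambulance + transport
        if onset_to_call + response < 4.5:
            hits += 1
    return hits / n

baseline = eligible_fraction(mean_response_h=1.5)
faster_dispatch = eligible_fraction(mean_response_h=1.0)
print(baseline, faster_dispatch)   # the intervention raises eligibility
```

Because both runs share a seed, the comparison is paired sample-by-sample, a common variance-reduction trick when ranking interventions in simulation studies.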


Copyright © 北京勤云科技发展有限公司 (Beijing Qinyun Technology Development Co., Ltd.) | 京ICP备09084417号