Similar documents: 20 results found (search time: 421 ms)
1.
QoS-enabled middleware has been widely used to construct distributed real-time and embedded (DRE) systems and has become a key technology for supporting real-time publish/subscribe services. This paper evaluates and analyzes three approaches to integrating real-time publish/subscribe services into QoS middleware, with emphasis on the performance of the container-managed approach and a comparison against object-oriented real-time publish/subscribe services. The results show that, under the container-managed approach, the CIAO middleware exhibits slightly longer latency but predictable behavior, making it suitable for DRE systems.
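Since the comparison above hinges on latency and its predictability (jitter), a tiny measurement harness makes the metric concrete. This is only an illustrative sketch: the two dispatch functions stand in for the container-managed and object-oriented publish/subscribe paths and are not CIAO or any real middleware API.

```python
# Hypothetical latency/jitter harness in the spirit of the comparison above;
# the two "dispatch" callables stand in for container-managed and
# object-oriented publish/subscribe paths and are not actual middleware APIs.
import statistics
import time

def measure(dispatch, samples=1000):
    """Return (mean_us, stdev_us) of dispatch latency."""
    latencies = []
    for _ in range(samples):
        start = time.perf_counter()
        dispatch("event")                      # deliver one publication
        latencies.append((time.perf_counter() - start) * 1e6)
    return statistics.mean(latencies), statistics.stdev(latencies)

def container_managed(event):   # placeholder: container mediates delivery
    pass

def object_oriented(event):     # placeholder: direct object invocation
    pass

for name, path in [("container-managed", container_managed),
                   ("object-oriented", object_oriented)]:
    mean_us, jitter_us = measure(path)
    print(f"{name}: mean={mean_us:.1f}us jitter(stdev)={jitter_us:.1f}us")
```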

2.
Distributed real-time and embedded (DRE) systems have become critical in domains such as avionics (e.g., flight mission computers), telecommunications (e.g., wireless phone services), tele-medicine (e.g., robotic surgery), and defense applications (e.g., total ship computing environments). These types of systems are increasingly interconnected via wireless and wireline networks to form systems of systems. A challenging requirement for these DRE systems involves supporting a diverse set of quality of service (QoS) properties, such as predictable latency/jitter, throughput guarantees, scalability, 24x7 availability, dependability, and security, that must be satisfied simultaneously in real time. Although increasing portions of DRE systems are based on QoS-enabled commercial-off-the-shelf (COTS) hardware and software components, the complexity of managing long lifecycles (often ∼15-30 years) remains a key challenge for DRE developers and system integrators. For example, substantial time and effort are spent retrofitting DRE applications when the underlying COTS technology infrastructure changes. This paper provides two contributions that help improve the development, validation, and integration of DRE systems throughout their lifecycles. First, we illustrate the challenges in creating and deploying QoS-enabled component middleware-based DRE applications and describe our approach to resolving these challenges based on a new software paradigm called Model Driven Middleware (MDM), which combines model-based software development techniques with QoS-enabled component middleware to address key challenges faced by developers of DRE systems — particularly composition, integration, and assured QoS for end-to-end operations. Second, we describe the structure and functionality of CoSMIC (Component Synthesis using Model Integrated Computing), an MDM toolsuite that addresses key DRE application and middleware lifecycle challenges, including partitioning the components to use distributed resources effectively, validating software configurations, assuring multiple simultaneous QoS properties in real time, and safeguarding against rapidly changing technology.

3.
Assuring end-to-end quality-of-service (QoS) in distributed real-time and embedded (DRE) systems is hard due to the heterogeneity and scale of communication networks, transient behavior, and the lack of mechanisms that holistically schedule different resources end-to-end. This paper makes two contributions to research focusing on overcoming these problems in the context of wide area network (WAN)-based DRE applications that use the OMG Data Distribution Service (DDS) QoS-enabled publish/subscribe middleware. First, it provides an analytical approach to bound the delays incurred along the critical path in a typical DDS-based publish/subscribe stream, which helps ensure predictable end-to-end delays. Second, it presents the design and evaluation of a policy-driven framework called Velox. Velox combines multi-layer, standards-based technologies—including the OMG DDS and IP DiffServ—to support end-to-end QoS in heterogeneous networks and shield applications from the details of network QoS mechanisms by specifying per-flow QoS requirements. The results of empirical tests conducted using Velox show how combining DDS with DiffServ enhances the schedulability and predictability of DRE applications, improves data delivery over heterogeneous IP networks, and provides network-level differentiated performance.
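To make the DiffServ side of this design concrete, the sketch below shows one way per-flow QoS classes could be mapped to DSCP values and stamped on a UDP socket. It is not Velox's policy framework or a DDS API; the class names and the DSCP mapping are assumptions for illustration.

```python
# Illustrative only: map per-flow QoS classes to DiffServ code points and mark
# a UDP socket accordingly. Velox's actual policy language and DDS integration
# are not shown; the class names and mapping below are assumptions.
import socket

DSCP_BY_CLASS = {"control": 46,   # EF: low-latency flows
                 "sensor":  26,   # AF31
                 "bulk":     0}   # best effort

def open_marked_socket(qos_class):
    dscp = DSCP_BY_CLASS[qos_class]
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    # DSCP occupies the upper six bits of the (legacy) IP TOS byte.
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, dscp << 2)
    return sock

if __name__ == "__main__":
    s = open_marked_socket("control")
    s.sendto(b"sample", ("127.0.0.1", 7400))
```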

4.
Assuring end-to-end QoS in enterprise distributed real-time and embedded (DRE) systems is hard due to the heterogeneity and transient behavior of communication networks, the lack of integrated mechanisms that schedule communication and computing resources holistically, and the scalability limits of IP multicast in wide-area networks (WANs). This paper makes three contributions to research on overcoming these problems in the context of enterprise DRE systems that use the OMG Data Distribution Service (DDS) quality-of-service (QoS)-enabled publish/subscribe (pub/sub) middleware over WANs. First, it codifies the limitations of conventional DDS implementations deployed over WANs. Second, it describes a middleware component called Proxy DDS that bridges multiple, isolated DDS domains deployed over WANs. Third, it describes the NetQSIP framework that combines multi-layer, standards-based technologies including the OMG-DDS, Session Initiation Protocol (SIP), and IP DiffServ to support end-to-end QoS in a WAN and shield pub/sub applications from tedious and error-prone details of network QoS mechanisms. The results of experiments using Proxy DDS and NetQSIP show how combining DDS with SIP in DiffServ networks significantly improves dynamic resource reservation in WANs and provides effective end-to-end QoS management.
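The bridging idea behind Proxy DDS can be illustrated with a toy in-process model: samples published in one isolated domain are re-published into another. The Domain class below is a stand-in, not the OMG DDS API.

```python
# A minimal sketch of the bridging idea described above: samples published in
# one (isolated) domain are re-published into another. The Domain class is a
# stand-in for a real DDS domain, not the OMG DDS API.
from collections import defaultdict

class Domain:
    def __init__(self, name):
        self.name = name
        self.subscribers = defaultdict(list)   # topic -> callbacks

    def subscribe(self, topic, callback):
        self.subscribers[topic].append(callback)

    def publish(self, topic, sample):
        for cb in self.subscribers[topic]:
            cb(sample)

def bridge(src, dst, topic):
    """Forward every sample on `topic` from domain `src` into domain `dst`."""
    src.subscribe(topic, lambda sample: dst.publish(topic, sample))

lan_a, lan_b = Domain("site-A"), Domain("site-B")
bridge(lan_a, lan_b, "TrackUpdates")
lan_b.subscribe("TrackUpdates", lambda s: print("site-B received:", s))
lan_a.publish("TrackUpdates", {"id": 42, "x": 1.0, "y": 2.0})
```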

5.
Commercial off-the-shelf (COTS) middleware is now widely used to develop distributed real-time and embedded (DRE) systems. DRE systems are themselves increasingly combined to form systems of systems that have diverse quality of service (QoS) requirements. Earlier generations of COTS middleware, such as Object Request Brokers (ORBs) based on the CORBA 2.x standard, did not facilitate the separation of QoS policies from application functionality, which made it hard to configure and validate complex DRE applications. The new generation of component middleware, such as the CORBA Component Model (CCM) based on the CORBA 3.0 standard, addresses the limitations of earlier generation middleware by establishing standards for implementing, packaging, assembling, and deploying component implementations. There has been little systematic empirical study of the performance characteristics of component middleware implementations in the context of DRE systems. This paper therefore provides four contributions to the study of CCM for DRE systems. First, we describe the challenges involved in benchmarking different CCM implementations. Second, we describe key criteria for comparing different CCM implementations using key black-box and white-box metrics. Third, we describe the design of our CCMPerf benchmarking suite to illustrate test categories that evaluate aspects of CCM implementations to determine their suitability for the DRE domain. Fourth, we use CCMPerf to benchmark the CIAO implementation of CCM and analyze the results. These results show that the CIAO implementation based on the more sophisticated CORBA 3.0 standard has comparable DRE performance to that of the TAO implementation based on the earlier CORBA 2.x standard.
Arvind S. Krishna is a PhD student in the Electrical Engineering and Computer Science Department at Vanderbilt University and a member of the Institute for Software Integrated Systems. He received his MA in management from the Birla Institute of Technology and Science (BITS), Pilani, India, and his MS in computer science from the University of California, Irvine. His research interests include patterns, real-time Java technologies for Real-time CORBA, model-integrated QA techniques, and tools for partial evaluation and specialization of middleware. He is a student member of the IEEE and ACM. Contact him at the Inst. for Software Integrated Systems, 2015 Terrace Pl., Nashville, TN 37203. Balachandran Natarajan is a senior staff engineer at the Institute for Software Integrated Systems and a PhD student in electrical engineering and computer science at Vanderbilt University. His research focuses on applying patterns, optimization principles, and frameworks to build high-performance, dependable, and real-time distributed systems. He received his MS in computer science from Washington University. Contact him at the Inst. for Software Integrated Systems, 2015 Terrace Pl., Nashville, TN 37203. Aniruddha Gokhale is an assistant professor in the Electrical Engineering and Computer Science Department at Vanderbilt University and a senior research scientist at the Institute for Software Integrated Systems. His research focuses on real-time component middleware optimizations, distributed systems and networks, model-driven software synthesis applied to component middleware-based distributed systems, and distributed resource management. He received his PhD in computer science from Washington University. Contact him at the Inst. for Software Integrated Systems, 2015 Terrace Pl., Nashville, TN 37203. Douglas C. Schmidt is a professor in the Electrical Engineering and Computer Science Department at Vanderbilt University and a senior research scientist at the Institute for Software Integrated Systems. His research interests include patterns, optimization techniques, and empirical analyses of software frameworks and domain-specific modeling environments that facilitate the development of distributed real-time and embedded middleware and applications running over high-speed networks and embedded system interconnects. He received his PhD in information and computer science at the University of California, Irvine. Contact him at the Inst. for Software Integrated Systems, 2015 Terrace Pl., Nashville, TN 37203. Nanbor Wang is a Research Scientist in the Distributed Technologies Group at the Tech-X Corporation in Boulder, Colorado. He received M.S. and Ph.D. degrees in Computer Science from Washington University in St. Louis, Missouri. While working toward his degree, he also worked as a Research Associate in the Center for Distributed Object Computing in the Department of Computer Science, where he conducted research on the design, implementation, and analysis of object-oriented and component-based techniques for the development of distributed systems and the management of extra-functional concerns. Dr. Wang's work currently focuses on developing and applying middleware techniques, such as CORBA and Grid Computing, for enabling distributed and parallel scientific applications, such as distributed data analysis, remote visualization and collaboration, and workflow management for large-scale scientific applications. Gautam H. Thaker was born in Amdavad, India, in 1955. He holds a BSEE ('75) and an MSEE ('77) from Clemson University, Clemson, SC. He spent the 1985-86 academic year at M.I.T. as a visiting researcher. His research interests include the analysis, design, construction, and validation of real-time command and control systems. In particular, he has focused on interactions between operating systems, networking protocols, and middleware technologies.
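The black-box versus white-box distinction drawn above can be illustrated with a small harness that observes end-to-end throughput from outside while also timing one internal stage from inside. The operation and its marshaling stage are placeholders, not CCMPerf or CIAO code.

```python
# Sketch of the black-box vs. white-box distinction used when benchmarking a
# middleware operation: end-to-end latency/throughput is observed from outside,
# while an internal marshaling stage is timed from inside. Placeholders only.
import time

white_box = {"marshal_us": 0.0}

def marshal(request):
    start = time.perf_counter()
    data = repr(request).encode()              # stand-in for real marshaling
    white_box["marshal_us"] += (time.perf_counter() - start) * 1e6
    return data

def invoke(request):
    payload = marshal(request)
    return len(payload)                        # stand-in for remote dispatch

N = 10_000
start = time.perf_counter()
for i in range(N):
    invoke({"op": "ping", "seq": i})
elapsed = time.perf_counter() - start
print(f"black-box: {N / elapsed:.0f} calls/s, {elapsed / N * 1e6:.2f} us/call")
print(f"white-box: {white_box['marshal_us'] / N:.2f} us/call spent marshaling")
```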

6.
Component middleware provides dependable and efficient platforms that support key functional and quality of service (QoS) needs of distributed real-time embedded (DRE) systems. Component middleware, however, also introduces challenges for DRE system developers, such as evaluating the predictability of DRE system behavior and choosing the right design alternatives before committing to a specific platform or platform configuration. Model-based technologies help address these issues by enabling design-time analysis and providing the means to automate the development, deployment, configuration, and integration of component-based DRE systems. To this end, this paper applies model checking techniques to DRE design models using model transformations to verify key QoS properties of component-based DRE systems developed using Real-time CORBA. We introduce a formal semantic domain for a general class of DRE systems that enables the verification of distributed non-preemptive real-time scheduling. Our results show that model-based techniques enable design-time analysis of timed properties and can be applied to effectively predict, simulate, and verify the event-driven behavior of component-based DRE systems.
This research was supported by NSF Grants CCR-0225610 and ACI-0204028. Gabor Madl is a Ph.D. student and a graduate student researcher at the Center for Embedded Computer Systems at the University of California, Irvine. His advisor is Nikil Dutt. His research interests include the formal verification, optimization, component-based composition, and QoS management of distributed real-time embedded systems. He received his M.S. in computer science from Vanderbilt University and in computer engineering from the Budapest University of Technology and Economics. Dr. Sherif Abdelwahed received his Ph.D. degree in Electrical and Computer Engineering from the University of Toronto, Canada, in 2001. During 2000–2001, he was a research scientist with the system diagnosis group at the Rockwell Scientific Company. Since 2001, he has been with the Department of Electrical Engineering and Computer Science at Vanderbilt University as a Research Assistant Professor. His research interests include verification and control of distributed real-time systems, and model-based diagnosis of discrete-event and hybrid systems. Dr. Douglas C. Schmidt is a Professor of Computer Science, Associate Chair of the Computer Science and Engineering program, and a Senior Researcher in the Institute for Software Integrated Systems (ISIS), all at Vanderbilt University. He has published over 300 technical papers and 6 books that cover a range of research topics, including patterns, optimization techniques, and empirical analyses of software frameworks and domain-specific modeling environments that facilitate the development of distributed real-time and embedded (DRE) middleware and applications. Dr. Schmidt has served as a Deputy Office Director and a Program Manager at DARPA, where he led the national R&D effort on middleware for DRE systems. In addition to his academic research and government service, Dr. Schmidt has over fifteen years of experience leading the development of ACE, TAO, CIAO, and CoSMIC, which are widely used, open-source DRE middleware frameworks and model-driven tools that contain a rich set of components and domain-specific languages that implement patterns and product-line architectures for high-performance DRE systems.
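As rough intuition for the kind of timed analysis described above, the sketch below exhaustively simulates non-preemptive earliest-deadline-first dispatch of a periodic task set over one hyperperiod and reports any deadline miss. The task set is invented, and this is a simulation stand-in, not the paper's formal semantic domain or model checker.

```python
# Toy stand-in: simulate non-preemptive EDF dispatch of periodic tasks over one
# hyperperiod and report deadline misses. Task parameters are invented.
from math import gcd
from functools import reduce

tasks = [("sensor", 4, 1), ("filter", 6, 2), ("actuator", 12, 3)]  # (name, period, wcet)

def lcm(a, b):
    return a * b // gcd(a, b)

hyper = reduce(lcm, (p for _, p, _ in tasks))
releases = sorted((t, name, t + period, wcet)        # (release, name, deadline, wcet)
                  for name, period, wcet in tasks
                  for t in range(0, hyper, period))

clock, pending, misses = 0, [], []
while releases or pending:
    pending += [job for job in releases if job[0] <= clock]
    releases = [job for job in releases if job[0] > clock]
    if not pending:
        clock = releases[0][0]                       # idle until next release
        continue
    pending.sort(key=lambda job: job[2])             # earliest deadline first
    release, name, deadline, wcet = pending.pop(0)
    clock = max(clock, release) + wcet               # job runs to completion
    if clock > deadline:
        misses.append((name, release, clock, deadline))

print("deadline misses:", misses or "none over one hyperperiod")
```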

7.
8.
Distributed real-time and embedded (DRE) systems in which application requirements and environmental conditions may not be known a priori—or which may vary at run-time—can benefit from an adaptive approach to management of quality-of-service (QoS) to meet key constraints, such as end-to-end timeliness. Moreover, coordinated management of multiple QoS capabilities across multiple layers of applications and their supporting middleware can help to achieve necessary assurances of meeting these constraints. This paper offers two contributions to the study of adaptive DRE computing systems: (1) a case study of our integration of multiple middleware QoS management technologies to manage quality and timeliness of imagery adaptively within a representative DRE avionics system, and (2) empirical results and analysis of the impact of that integration on key trade-offs between timeliness and image quality in that system.
This work was supported in part by AFRL contract F33615-97-D-1155/0005 (WSOA), NSF ITR CCR-0312859, Siemens, and DARPA/AFRL contracts F33615-03-C-4112, F30602-98-C-0187, and F33615-00-C-1694. Approved for public release, distribution unlimited. Christopher D. Gill is an Assistant Professor in the Department of Computer Science and Engineering at Washington University in St. Louis. He has published over 50 refereed technical articles in leading journals, conferences, workshops, and book series. His research focuses on distributed real-time embedded systems, with particular emphasis on adaptive resource management, scheduling, and software design and implementation for time-and-space constrained systems. Dr. Gill has chaired numerous workshop and conference program committees, and has participated widely in review panels and standards organizations in the distributed and real-time systems areas. The research he has led has produced several freely available open-source software frameworks, including the Kokyu scheduling and dispatching framework and the nORB small-footprint real-time object request broker. Jeanna Gossett joined The Boeing Company in 1999 as a member of the Bold Stroke/Open Systems Architecture team. Jeanna has worked on several CRAD projects, including Weapon Systems Open Architecture (WSOA), where she was responsible for incorporating quality of service and resource management software technology into the fighter aircraft real-time embedded system application. Jeanna has since joined the F/A-18 New Product Development Mission Systems team. Prior to joining The Boeing Company in 1999, she worked in the telecommunications industry as an embedded systems developer at Ericsson and Siemens AG. Jeanna received a B.S. in Electrical Engineering from Southern Illinois University, Edwardsville, and is a 2005 M.B.A. candidate at Washington University in St. Louis. David Corman is a Technical Fellow at the Boeing Company, located in St. Louis, Mo. Dave is the chief scientist for the Network Centric Operations (NCO) thrust in Phantom Works (PW) and is responsible for developing the NCO technology research agenda and investment strategy. He is also the Principal Investigator (PI) for a variety of Air Force and Defense Advanced Research Projects Agency (DARPA) programs that are producing technologies for integrating legacy platforms into the emerging Global Information Grid and for autonomous control of unmanned systems. Since joining the former McDonnell-Douglas (now part of the Boeing Company) in 1983, Dave has worked on numerous projects ranging from embedded systems to large C4I and weapon systems. A major focus of Dave's career has been on the development of C4I system simulations and on mission planning system development for aircraft and missiles. He has also served as a consultant to many weapon system and C4I programs in St. Louis, Seattle, and California. Prior to joining McDonnell-Douglas, Dave spent five years at the Johns Hopkins University Applied Physics Laboratory. He was the first recipient of a Naval Research Laboratory Fellowship from the University of Maryland—College Park, where he received his PhD in Electrical Engineering in 1983. Joseph Loyall is a division scientist at BBN Technologies, where he leads the Distributed Real-time Embedded (DRE) systems research thrust in the Distributed Systems Advanced Middleware Technology group. He is actively involved in developing integrated dynamic resource management capabilities and advanced software engineering using model driven architecture (MDA) approaches, and in applying adaptive behavior to operational embedded systems such as collections of unmanned and manned air vehicles. Dr. Loyall has a Ph.D. and M.S. in computer science from the University of Illinois and a B.S. in computer science from Indiana University. He can be contacted at jloyall@bbn.com. Richard E. Schantz is a principal scientist at BBN Technologies in Cambridge, Mass., where he has been a key contributor to advanced distributed computing R&D for the past 30 years. His research has been instrumental in defining and evolving the concepts underlying middleware since its emergence in the early days of the Internet. He was directly responsible for developing the first operational distributed object computing capability and transitioning it to production use. More recently, he has led research efforts toward developing and demonstrating the effectiveness of middleware support for adaptively managed Quality of Service control, as principal investigator on a number of key DARPA projects in the areas of adaptive real-time behavior, survivability, and advanced software engineering. Schantz received his Ph.D. degree in Computer Science from the State University of New York at Stony Brook in 1974. Michael Atighetchi is a senior scientist at BBN Technologies and a senior member of the Distributed Systems Advanced Middleware Technology group. His interests include the use of adaptation in survivable systems, network and operating system security, and distributed coordination. Contact him at matighet@bbn.com. Douglas C. Schmidt (d.schmidt@vanderbilt.edu) is a Professor of Electrical Engineering and Computer Science, Associate Chair of the Computer Science and Engineering program, and a Senior Researcher in the Institute for Software Integrated Systems (ISIS) at Vanderbilt University. He has published over 300 technical papers and books that cover a range of research topics, including patterns, optimization techniques, and empirical analyses of software frameworks and domain-specific modeling environments that facilitate the development of distributed real-time and embedded (DRE) middleware and applications running over high-speed networks and embedded system interconnects. Dr. Schmidt has served as a Deputy Office Director and a Program Manager at DARPA, where he led the national R&D effort on middleware for DRE systems.
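The timeliness/quality trade-off described above can be pictured as a simple adaptive loop that sheds image quality when delivery latency approaches its deadline and restores it when slack returns. The quality levels, thresholds, and latency model below are invented for illustration and are not the WSOA mechanism.

```python
# Hedged illustration of the timeliness/quality trade-off: degrade image
# quality when latency nears the deadline, restore it when slack returns.
import random

QUALITY_LEVELS = [100, 80, 60, 40, 20]          # percent of full image quality
DEADLINE_MS = 50.0

def deliver(quality):
    """Pretend transmission: latency roughly scales with image quality."""
    return quality * 0.5 + random.uniform(-5, 5)     # milliseconds

level = 0
for frame in range(20):
    quality = QUALITY_LEVELS[level]
    latency = deliver(quality)
    print(f"frame {frame:2d}: quality={quality:3d}% latency={latency:5.1f}ms "
          f"{'MET' if latency <= DEADLINE_MS else 'MISSED'}")
    if latency > 0.9 * DEADLINE_MS and level < len(QUALITY_LEVELS) - 1:
        level += 1                               # shed quality to protect the deadline
    elif latency < 0.6 * DEADLINE_MS and level > 0:
        level -= 1                               # slack available: restore quality
```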

9.
Applying the middleware and component-based development ideas used in business applications and desktop systems to the domain of distributed real-time and embedded (DRE) software is currently a hot research topic. The CORBA Component Model (CCM) solves the problem of cross-platform, language-independent component-based development, but it has design deficiencies when it comes to providing QoS guarantees. This paper first analyzes the overall architecture of CCM and then proposes Z-CCM, a new component model that supports DRE software development. Z-CCM optimizes CCM in three respects (the component implementation framework, the assembly process, and the runtime environment) to remedy CCM's shortcomings in providing QoS guarantees and thereby improve the efficiency of DRE software development. The paper concludes by describing the application context of Z-CCM.

10.
A Service-Oriented Authorization Management Model (cited 19 times: 0 self-citations, 19 by others)
Service-Oriented Architecture (SOA) is an approach to designing and building loosely coupled software systems; it allows distributed applications developed on middleware to be shared as software services in an Internet environment. The user authorization systems of traditional middleware are reasonably flexible and largely satisfy the security needs of closed systems, but under the SOA model they have difficulty handling authorization when different nodes and systems request services from each other and share resources. This paper proposes a service-oriented authorization management model that supports delegation between users and provides a degree of reasoning capability, offering application developers a more complete authorization management mechanism and extending the middleware's ability to share resources and services across organizations. The model has been implemented and validated on a J2EE application server. Experiments show that it is flexible and scalable, and that its performance overhead stays within a reasonable range.
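A minimal sketch of the delegation-plus-inference idea: a user is authorized either by a directly granted permission or by following a chain of delegations back to someone who holds it. The data structures and names are assumptions, not the paper's actual model.

```python
# Minimal sketch of authorization with delegation chains; all data is invented.
GRANTS = {"alice": {"order:approve"}, "bob": {"report:read"}}
DELEGATIONS = [("alice", "carol", "order:approve"),      # alice delegates to carol
               ("carol", "dave", "order:approve")]       # carol re-delegates to dave

def authorized(user, permission, seen=None):
    seen = seen or set()
    if permission in GRANTS.get(user, set()):
        return True
    for grantor, grantee, perm in DELEGATIONS:
        if grantee == user and perm == permission and grantor not in seen:
            # inference step: follow the delegation chain back to a real grant
            if authorized(grantor, permission, seen | {user}):
                return True
    return False

for user in ("alice", "carol", "dave", "bob"):
    print(user, "may approve orders:", authorized(user, "order:approve"))
```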

11.
Based on an analysis of existing embedded real-time middleware, this paper presents a QoS architecture for distributed embedded real-time middleware that adjusts its service policies as the environment changes dynamically, and analyzes two resource management strategies for guaranteeing quality of service.

12.
A Fault-Tolerance Configuration Management Approach for Middleware Services (cited 1 time: 0 self-citations, 1 by others)
李军国  黄罡  邹键  梅宏 《计算机学报》2007,30(10):1696-1704
This paper proposes a fault-tolerance management approach based on runtime software architecture that lets developers and administrators tailor appropriate fault detection and recovery mechanisms to different middleware service failures. First, the runtime software architecture automatically constructs component-dependency and error-propagation views, providing a global picture for understanding and analyzing the reliability of the whole system. Then, fault-tolerance mechanisms are configured by manipulating the runtime software architecture. Finally, AOP techniques are used to weave the fault-tolerance mechanisms into the middleware so that it acquires the specified fault-tolerance capabilities. The whole process is carried out semi-automatically with the aid of a visual tool and has been validated on J2EE middleware.
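The AOP-style weaving described above can be approximated in miniature with a decorator that intercepts a service invocation, detects failures, and applies a configurable recovery policy. The service and the retry/fallback policy below are invented placeholders, not the paper's tooling.

```python
# Illustrative only: a decorator plays the role of the AOP interceptor that
# weaves fault detection and a configurable recovery policy around a service.
import functools
import random

def fault_tolerant(retries=2, fallback=None):
    def weave(service):
        @functools.wraps(service)
        def wrapper(*args, **kwargs):
            for attempt in range(retries + 1):
                try:
                    return service(*args, **kwargs)      # detection point
                except Exception as err:
                    print(f"detected failure of {service.__name__} "
                          f"(attempt {attempt + 1}): {err}")
            return fallback                              # recovery: degrade gracefully
        return wrapper
    return weave

@fault_tolerant(retries=2, fallback="cached-response")
def naming_lookup(name):
    if random.random() < 0.5:                            # simulated transient fault
        raise ConnectionError("naming service unreachable")
    return f"corbaloc::host/{name}"

print(naming_lookup("TradeService"))
```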

13.
Computer and network security is becoming increasingly important as both large systems and, increasingly, small embedded systems are networked. Middleware frameworks aid the system developer who must interconnect individual systems into larger interconnected, distributed systems. However, there exist very few middleware frameworks that have been designed for use with embedded systems, which constitute the vast majority of CPUs produced each year, and none offer the range of security mechanisms required by the wide range of embedded system applications. This paper describes MicroQoSCORBA, a highly configurable middleware framework for embedded systems, and its security subsystem. It first presents an analysis of security requirements for embedded applications and what can and should be done in middleware. It then presents the design of MicroQoSCORBA’s security subsystem and the wide range of mechanisms it supports. Experimental results for these mechanisms are presented for two different embedded systems and one desktop computer that collectively represent a wide range of computational capabilities.

14.
Real-time and embedded systems have traditionally been designed for closed environments where operating conditions, input workloads, and resource availability are known a priori, and are subject to little or no change at runtime. There is increasing demand, however, for adaptive capabilities in distributed real-time and embedded (DRE) systems that execute in open environments where system operational conditions, input workload, and resource availability cannot be characterized accurately a priori. A challenging problem faced by researchers and developers of such systems is devising effective adaptive resource management strategies that can meet end-to-end quality of service (QoS) requirements of applications. To address key resource management challenges of open DRE systems, this paper presents the Hierarchical Distributed Resource-management Architecture (HiDRA), which provides adaptive resource management using control techniques that adapt to workload fluctuations and resource availability for both bandwidth and processor utilization simultaneously. This paper presents three contributions to research in adaptive resource management for DRE systems. First, we describe the structure and functionality of HiDRA. Second, we present an analytical model of HiDRA that formalizes its control-theoretic behavior and presents analytical assurance of system performance. Third, we evaluate the performance of HiDRA via experiments on a representative DRE system that performs real-time distributed target tracking. Our analytical and empirical results indicate that HiDRA yields predictable, stable, and efficient system performance, even in the face of changing workload and resource availability.
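The control-theoretic flavor of this approach can be conveyed with a one-variable proportional controller that adjusts a publication rate so that measured utilization tracks a set point. The plant model and gain are assumptions; this is not HiDRA itself.

```python
# Small feedback-control sketch in the spirit of utilization management: a
# proportional controller steers the sensor publication rate toward a CPU
# utilization set point. The plant model and gain are invented.
SET_POINT = 0.70        # desired CPU utilization
GAIN = 20.0             # proportional gain (Hz of rate change per unit error)
COST_PER_MSG = 0.004    # seconds of CPU per message (assumed plant model)

rate_hz = 50.0
for step in range(10):
    utilization = rate_hz * COST_PER_MSG          # "measured" utilization
    error = SET_POINT - utilization
    rate_hz = max(1.0, rate_hz + GAIN * error)    # actuate: adjust publication rate
    print(f"step {step}: utilization={utilization:.2f} -> new rate={rate_hz:.1f} Hz")
```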

15.
A stock market data processing system that can handle high data volumes at low latencies is critical to market makers. Such systems play a critical role in algorithmic trading, risk analysis, market surveillance, and many other related areas. The current systems tend to use specialized software and custom processors. We show that such a system can be built with general-purpose middleware and run on commodity hardware. The middleware we use is IBM System S, which includes transport technology from IBM WebSphere MQ Low Latency Messaging (LLM). Our performance evaluation consists of two parts. First, we determined the effectiveness of each system optimization that the hardware and software infrastructure makes available. These optimizations were implemented at all software levels—application, middleware, and operating system. Second, we evaluated our system on different hardware platforms. Copyright © 2011 John Wiley & Sons, Ltd.

16.
Wireless Sensor Networks (WSNs) are useful for a wide range of applications from different domains. Recently, new features and design trends have emerged in the WSN field, making those networks appealing not only to the scientific community but also to the industry. One such trend is running different applications on heterogeneous sensor nodes deployed in multiple WSNs in order to better exploit the expensive physical network infrastructure. Another trend deals with the capability of accessing sensor-generated data from the Web, fitting WSNs into the novel paradigms of the Internet of Things (IoT) and Web of Things (WoT). Using well-known and broadly accepted Web standards and protocols enables the interoperation of heterogeneous WSNs and the integration of their data with other Web resources, in order to provide the final user with value-added information and applications. Such emergent scenarios, where multiple networks and applications interoperate to meet high-level user requirements, will pose several challenges in the design and execution of WSN systems. One of these challenges regards the fact that applications will probably compete for the resources offered by the underlying sensor nodes through the Web. Thus, it is crucial to design mechanisms that effectively and dynamically coordinate the sharing of the available resources to optimize resource utilization while meeting application requirements. However, it is likely that the Quality of Service (QoS) requirements of different applications cannot be simultaneously met while efficiently sharing the scarce network resources, bringing the need to manage an inherent tradeoff. In this paper, we argue that a middleware platform is required to manage heterogeneous WSNs and efficiently share their resources while satisfying user needs in the emergent scenarios of the WoT. Such middleware should provide several services to control running applications as well as to distribute and coordinate nodes in the execution of submitted sensing tasks in an energy-efficient and QoS-enabled way. As part of the middleware-provided services, we present the Resource Allocation in Heterogeneous WSNs (SACHSEN) algorithm. SACHSEN is a new resource allocation heuristic for systems composed of heterogeneous WSNs that effectively deals with the tradeoff between possibly conflicting QoS requirements and exploits the heterogeneity of multiple WSNs.
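As a rough illustration of the trade-off SACHSEN manages (not the algorithm itself), the sketch below greedily assigns sensing tasks to nodes by scoring a weighted mix of QoS fit and remaining energy. The node data, weights, and scoring rule are assumptions.

```python
# Not SACHSEN itself: a greedy sketch of the QoS-vs-energy trade-off it
# manages. Node data, weights, and the scoring rule are invented.
NODES = [  # name, max sampling rate (Hz), remaining energy (0..1)
    {"name": "n1", "max_rate": 10, "energy": 0.9},
    {"name": "n2", "max_rate": 50, "energy": 0.4},
    {"name": "n3", "max_rate": 25, "energy": 0.7},
]
TASKS = [{"app": "fire-detect", "rate": 20}, {"app": "humidity", "rate": 5}]
W_QOS, W_ENERGY = 0.6, 0.4

def score(node, task):
    if node["max_rate"] < task["rate"]:
        return -1.0                                   # cannot satisfy the QoS need
    qos_fit = task["rate"] / node["max_rate"]         # prefer not to over-provision
    return W_QOS * qos_fit + W_ENERGY * node["energy"]

for task in TASKS:
    best = max(NODES, key=lambda n: score(n, task))
    best["energy"] -= 0.05 * task["rate"] / best["max_rate"]   # charge the node
    print(f"{task['app']} ({task['rate']} Hz) -> {best['name']}")
```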

17.
Configuration and coordination are central issues in the design and implementation of middleware systems and are one of the reasons why building such systems is more complex than constructing stand-alone sequential programs. Through configuration, the structure of the system is established—which elements it contains, where they are located and how they are interconnected. Coordination is concerned with the interaction of the various components—when an interaction takes place, which parties are involved, what protocols are followed. Its purpose is to coordinate the behaviour of the various components to meet the overall system specification. The open and adaptive nature of middleware systems makes the task of configuration and coordination particularly challenging. We propose a model that can operate in such an environment and enables the dynamic integration and coordination of components by observing and coercing their behaviour through the interception of the messages exchanged between them. Copyright © 2003 John Wiley & Sons, Ltd.
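Coordination by interception, as described above, can be sketched with a mediator that every message passes through and that coerces the interaction, here by holding back a message until its prerequisite has been observed. The components and the coordination rule are invented for illustration.

```python
# Sketch of coordination-by-interception: components talk only through a
# mediator, which observes each message and coerces the interaction.
class Mediator:
    def __init__(self):
        self.inboxes = {}
        self.seen = set()
        self.held = []

    def register(self, name):
        self.inboxes[name] = []

    def send(self, sender, receiver, kind, payload):
        self.seen.add(kind)
        # coordination rule: a "commit" may only be delivered after a "prepare"
        if kind == "commit" and "prepare" not in self.seen:
            self.held.append((sender, receiver, kind, payload))
            return
        self.inboxes[receiver].append((sender, kind, payload))
        if "prepare" in self.seen:
            for msg in list(self.held):               # release satisfied holds
                self.held.remove(msg)
                self.inboxes[msg[1]].append((msg[0], msg[2], msg[3]))

bus = Mediator()
bus.register("coordinator")
bus.register("worker")
bus.send("coordinator", "worker", "commit", {"tx": 7})   # intercepted and held back
bus.send("coordinator", "worker", "prepare", {"tx": 7})  # releases the held commit
print(bus.inboxes["worker"])  # prepare arrives first, then the coerced commit
```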

18.
Many-task computing is a well-established paradigm for implementing loosely coupled applications (tasks) on large-scale computing systems. However, few of the model’s existing implementations provide efficient, low-latency support for executing tasks that are tightly coupled multiprocessing applications. Thus, a vast array of parallel applications cannot readily be used effectively within many-task workloads. In this work, we present JETS, a middleware component that provides high performance support for many-parallel-task computing (MPTC). JETS is based on a highly concurrent approach to parallel task dispatch and on new capabilities now available in the MPICH2 MPI implementation and the ZeptoOS Linux operating system. JETS represents an advance over the few known examples of multilevel many-parallel-task scheduling systems: it more efficiently schedules and launches many short-duration parallel application invocations; it overcomes the challenges of coupling the user processes of each multiprocessing application invocation via the messaging fabric; and it concurrently manages many application executions in various stages. We report here on the JETS architecture and its performance on both synthetic benchmarks and an MPTC application in molecular dynamics.

19.
Scheduling is a key component for performance guarantees in the case of distributed applications running in large-scale heterogeneous environments. Another function of the scheduler in such systems is the implementation of resilience mechanisms to cope with possible faults. In this case, resilience is best approached using dedicated rescheduling mechanisms. The performance of rescheduling is very important in the context of large-scale distributed systems with dynamic behavior. The paper proposes a generic rescheduling algorithm. The algorithm can use a wide variety of scheduling heuristics that can be selected by users in advance, depending on the system’s structure. The rescheduling component is designed as a middleware service that aims to increase the dependability of large-scale distributed systems. The system was evaluated in a real-world implementation for a Grid system. The proposed approach supports fault tolerance and offers an improved mechanism for resource management. The evaluation of the proposed rescheduling algorithm was performed using modeling and simulation. We present experimental results confirming the performance and capabilities of the proposed rescheduling algorithm.
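The pluggable-heuristic rescheduling idea can be sketched as follows: when a node fails, its unfinished tasks are redistributed using whichever heuristic was selected in advance. The heuristics and workload data are illustrative assumptions, not the paper's algorithm.

```python
# Hedged sketch of pluggable-heuristic rescheduling after a node failure.
def min_load(task, nodes):
    return min(nodes, key=lambda n: sum(nodes[n]))       # least-loaded node

def min_completion_time(task, nodes):
    return min(nodes, key=lambda n: sum(nodes[n]) + task[1])

HEURISTICS = {"min-load": min_load, "mct": min_completion_time}

def reschedule(failed, nodes, heuristic="min-load"):
    orphaned = nodes.pop(failed)                          # tasks lost with the node
    choose = HEURISTICS[heuristic]
    for task in orphaned:
        target = choose(("orphan", task), nodes)
        nodes[target].append(task)
        print(f"task({task}s) moved {failed} -> {target}")
    return nodes

# node -> list of remaining task run times (seconds)
cluster = {"node-a": [30, 10], "node-b": [5], "node-c": [20, 20, 20]}
reschedule("node-c", cluster, heuristic="min-load")
print(cluster)
```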

20.
During the last decade, a new direction of distributed computing emerged—the grid—which is designed for working with sets of distributed resources. The results obtained, including the development of large-scale grid infrastructures, bring us to a discussion of the possibility of applying the new technologies in practice. The goal of the paper is to outline the scope of grid capabilities. The discussion relies on the formulation of basic points of the grid concept, principles of integration of spatially distributed resources, and the tasks solved by the grid middleware. Based on this, forms and methods of using grid integration technologies for work with computer, file, information, and other types of resources are described.
