Similar Literature
20 similar documents found (search time: 15 ms)
1.
In recent years, rapid technological advances have made grid computing an important area of research in distributed systems. Load balancing is an important and complex problem in grid computing. In this paper, we propose a dynamic, distributed load-balancing technique for computational grids called improved load balancing on enhanced GridSim with deadline control (IEGDC). We provide a new scheduling mechanism that enhances resource utilization and prevents resource overloading, together with a selection method for scheduling that considers the bandwidth state and the capacity of the various resources. We simulate the proposed load-balancing strategy on the GridSim platform. In comparison, the proposed mechanism outperforms existing schemes in terms of response time, resubmission time, and the numbers of finished and unfinished Gridlets. The simulation results are presented.
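The abstract's selection rule (rank resources by bandwidth state and capacity, and avoid overloaded ones) can be sketched as follows. This is a hypothetical illustration, not the paper's actual algorithm; the field names, weights, and overload threshold are all assumptions.

```python
# Hypothetical sketch of a bandwidth/capacity-aware resource-selection rule:
# rank grid resources by a weighted score and skip resources whose current
# load already exceeds an overload threshold. All names and weights are
# illustrative assumptions, not taken from the paper.

def select_resource(resources, bw_weight=0.5, cap_weight=0.5, overload=0.9):
    """Pick the best resource; each resource is a dict with
    'bandwidth' (normalized 0..1), 'capacity' (e.g. MIPS) and 'load' (0..1)."""
    candidates = [r for r in resources if r["load"] < overload]
    if not candidates:
        return None  # every resource is overloaded; caller must wait or resubmit
    max_cap = max(r["capacity"] for r in candidates)

    def score(r):
        # normalize capacity so both terms are on a 0..1 scale
        return bw_weight * r["bandwidth"] + cap_weight * r["capacity"] / max_cap

    return max(candidates, key=score)

resources = [
    {"name": "A", "bandwidth": 0.9, "capacity": 1000, "load": 0.95},  # overloaded
    {"name": "B", "bandwidth": 0.4, "capacity": 4000, "load": 0.30},
    {"name": "C", "bandwidth": 0.8, "capacity": 2000, "load": 0.50},
]
best = select_resource(resources)  # resource A is filtered out; B wins on capacity
```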

2.
The power grid is undergoing a major change, due mainly to the increased penetration of renewables and to novel digital instruments in the hands of end users that help them monitor and shift their loads. Such a transformation is only possible by coupling an information and communication technology infrastructure to the existing power distribution grid. Given the scale and interoperability requirements of such a future system, service-oriented architectures (SOAs) are seen as one of the reference models and are already considered in many of the proposed standards for the smart grid (e.g., IEC 62325 and OASIS eMIX). Beyond the technical issue of what the service-oriented architecture of the smart grid will look like, there is a pressing question about what the added value for the end user could be. Clearly, the operators need to guarantee availability and security of supply, but why should the end users care? In this paper, we explore a scenario in which end users can both consume and produce small quantities of energy and can trade these quantities in an open, deregulated market. For the trading, they delegate software agents that can fully interoperate and interact with one another, thus taking advantage of the SOA. In particular, the agents have strategies, inspired by game theory, to take advantage of a service-oriented smart-grid market and earn profit for their delegators, while implicitly helping to balance the power grid. The proposal is implemented with simulated agents interacting with existing Web services. To show the advantage of the strategic agents, we compare our approach with a "base" agent by means of simulations, highlighting the advantages of the proposal.
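The kind of market interaction the abstract describes, where prosumer agents trade small quantities of energy on behalf of their delegators, can be illustrated with a minimal uniform double auction. This is only a sketch under assumed rules (midpoint clearing, greedy matching); the paper's actual protocol and agent strategies are not specified here.

```python
# Minimal, hypothetical sketch of a prosumer energy market: agents submit asks
# (sell surplus) and bids (buy deficit), and a simple double auction greedily
# matches the highest bid with the lowest ask at the midpoint price. The rules
# and names are illustrative assumptions, not the paper's protocol.

def clear_market(bids, asks):
    """bids/asks: lists of (agent, price_per_kwh, kwh). Returns matched trades."""
    bids = sorted(bids, key=lambda b: -b[1])   # highest bid first
    asks = sorted(asks, key=lambda a: a[1])    # lowest ask first
    trades = []
    while bids and asks and bids[0][1] >= asks[0][1]:
        buyer, bid, need = bids.pop(0)
        seller, ask, avail = asks.pop(0)
        qty = min(need, avail)
        price = (bid + ask) / 2                # midpoint clearing price
        trades.append((buyer, seller, qty, price))
        if need > qty:                         # leftover demand stays in the book
            bids.insert(0, (buyer, bid, need - qty))
        if avail > qty:                        # leftover supply stays in the book
            asks.insert(0, (seller, ask, avail - qty))
    return trades

trades = clear_market(bids=[("home1", 0.30, 2.0), ("home2", 0.20, 1.0)],
                      asks=[("pv1", 0.25, 1.5), ("pv2", 0.10, 1.0)])
```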

3.
Scheduling stochastic workloads is a difficult task. Designing efficient scheduling algorithms for such workloads requires a good in-depth knowledge of basic random scheduling strategies. This paper analyzes the distribution of sequential jobs and the system behavior in heterogeneous computational grid environments where brokering is done in such a way that each computing element has a probability of being chosen proportional to its number of CPUs and, extending our previous paper, its relative speed. We provide the asymptotic behavior for several metrics (queue sizes, slowdowns, etc.) or, in some cases, an approximation of this behavior. We study these metrics for a variety of workload configurations (load, distribution, etc.) and compare our probabilistic analysis to simulations in order to validate our results. These results provide a good understanding of the system behavior for each proposed metric, enabling us to design advanced and efficient algorithms for more complex cases.
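The brokering rule analyzed here, choosing each computing element with probability proportional to its number of CPUs times its relative speed, is a weighted random draw; a sketch with an illustrative element list:

```python
# Sketch of the brokering rule from the abstract: each computing element is
# chosen with probability proportional to (number of CPUs x relative speed).
# random.choices performs the weighted draw directly. The element list below
# is illustrative.
import random

def pick_element(elements):
    """elements: list of (name, n_cpus, relative_speed) tuples."""
    weights = [n * s for _, n, s in elements]
    return random.choices(elements, weights=weights, k=1)[0]

elements = [("ce1", 16, 1.0), ("ce2", 64, 0.5), ("ce3", 8, 2.0)]
# weights are 16, 32, 16 -> selection probabilities 0.25, 0.50, 0.25
```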

4.
Multi-agent systems are widely used to address large-scale distributed combinatorial applications in the real world. One such application is meeting scheduling (MS), which is defined by a variety of features. The MS problem is naturally distributed and especially subject to many alterations. In addition, it is characterized by the presence of users' preferences, which turn it into a search for an optimal rather than a merely feasible solution. In real-world applications, however, users usually have conflicting preferences, which makes the solving process NP-hard. Most research efforts in the literature that adopt agent-based technologies tackle the MS problem as a static problem. They often share some common properties: allowing the relaxation of any user's time restrictions, not achieving any level of consistency among meetings to enhance the efficiency of the solving process, not tackling the consequences of a dynamic environment, and especially not addressing the real difficulty of distributed systems, which is the complexity of message-passing operations.
In an attempt to facilitate and streamline the process of scheduling meetings in any organization, the main contribution of this work is a new scalable agent-based approach for any dynamic MS problem, which we call MSRAC (Meeting Scheduling with Reinforcement of Arc Consistency). In this approach we authorize only the relaxation of users' preferences while maintaining arc consistency on the problem. The underlying protocol can efficiently reach the optimal solution (satisfying some predefined optimality criteria) whenever possible, using only minimal localized asynchronous communications. This is achieved with minimal message passing while preserving, as far as possible, the privacy of the users involved.
Detailed experimental results on randomly generated MS problems show that MSRAC is scalable and yields speedups over other approaches, especially for large problems with strong constraints.
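The arc consistency the abstract relies on can be illustrated with a compact, generic AC-3 pass over meeting time-slot domains. Note this centralized sketch is only illustrative: MSRAC itself is distributed and asynchronous, and the variables and constraint below are invented for the example.

```python
# Generic AC-3 sketch on meeting time-slot domains, illustrating the
# "arc consistency" referred to in the abstract. The paper's actual protocol
# is distributed and asynchronous; this centralized version is illustrative.
from collections import deque

def ac3(domains, constraints):
    """domains: {var: set of slots}; constraints: {(x, y): relation(a, b)}."""
    queue = deque(constraints)
    while queue:
        x, y = queue.popleft()
        rel = constraints[(x, y)]
        # remove values of x that have no supporting value in y's domain
        pruned = {a for a in domains[x] if not any(rel(a, b) for b in domains[y])}
        if pruned:
            domains[x] -= pruned
            if not domains[x]:
                return False  # domain wipe-out: no consistent schedule exists
            # re-examine arcs pointing at x, since x's domain shrank
            queue.extend((u, v) for (u, v) in constraints if v == x)
    return True

# Two meetings of the same attendee must not share a time slot (slots = hours).
domains = {"m1": {9, 10}, "m2": {9}}
ne = lambda a, b: a != b
ok = ac3(domains, {("m1", "m2"): ne, ("m2", "m1"): ne})
# after AC-3: m1's slot 9 is pruned, leaving m1 at 10 and m2 at 9
```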

5.
A Network-on-Chip (NoC) platform is an emerging topology for large-scale applications. It provides the required number of resources for critical and extensive computations. However, the computations may be interrupted by faults occurring at run-time. Hence, reliability of computations, as well as efficient resource management at run-time, is crucial for such many-core NoC systems. To achieve this, we utilize an agent-based management system in which agents are organized in a three-level hierarchy. We propose to incorporate reallocation and reconfiguration procedures into the agent hierarchy so that fault-tolerance mechanisms can be executed at run-time. Task reallocation enables local reconfiguration of a core, allowing it to be eventually reused in order to restore the original performance of communication and computations. The contributions of this paper are: (i) an algorithm for initial application mapping with spare cores; (ii) a multi-objective algorithm for efficient utilization of spare cores at run-time, enhancing fault tolerance while keeping the efficiency of communication and computations at an adequate level; (iii) an algorithm integrating the local reconfiguration procedure; and (iv) formal modeling and verification of the dynamic agent-based NoC management architecture incorporating these algorithms within the Event-B framework.

6.
The rising popularity of open source software (OSS) calls for a better understanding of the drivers of its adoption and diffusion. In this research, we propose an integrated framework that simultaneously investigates a broad range of social and economic factors in the diffusion dynamics of OSS, using an agent-based computational economics (ACE) approach. We find that interoperability costs, the variability of OSS support costs, and the duration of the PS upgrade cycle are major determinants of OSS diffusion. Furthermore, there are interaction effects between network topology, network density, and interoperability costs that strongly influence the diffusion dynamics of OSS. The proposed model can be used as a building block to further investigate complex competitive dynamics in software markets.

7.
In recent years, Web applications (WAs) have grown so quickly that they have already become crucial to the success of businesses. However, since they are built on Internet and open-standard technologies, WAs possess their own unique features, such as dynamic behaviors, heterogeneous representations, and novel data-handling mechanisms. These features underpin the success of WAs, but they bring new challenges to researchers and developers, especially in regard to testing WAs and ensuring their quality. Testing approaches for non-WAs have to be extended to handle these features before they can be used in WA testing. This paper presents an agent-based approach to data-flow testing of WAs. More precisely, the data-flow testing is performed by autonomous test agents at the method level, object level, and object-cluster level, from low to high abstraction. In the proposed process, an agent-based WA testing system (WAT) automatically generates and coordinates test agents, decomposing the task of testing an entire WA into a set of subtasks that the test agents can accomplish. The test agents, rooted in the Belief–Desire–Intention (BDI) model, cooperate with each other to complete the testing of a WA. An example shows the feasibility of the proposed approach.

8.
Guiyi Wei, Bin Xiao. Information Sciences, 2010, 180(23): 4543–4556
In a traditional computational grid environment, the owners of resources usually provide information about their resources through pre-configured information services or web services. However, such information is not sufficient for the scheduler in high-performance distributed computing. To solve this problem, we propose a scalable grid information service framework named PIVOT (adaPtive Information discoVery framewOrk for compuTational grid). By using deadline-constrained flooding collector dissemination and P2P-like information collection schemes, PIVOT provides an active mechanism to collect application-specific resource information; in particular, it provides a resource information service for application-specific schedulers. Best-effort performance in terms of overhead traffic and communication latency during information discovery is guaranteed by two new distributed cooperative algorithms. Experimental results from simulations and from a real computational grid platform demonstrate that PIVOT is highly adaptable for application-specific resource information discovery, and that it improves the accuracy of resource allocation and the efficiency of executing parallel tasks compared with traditional information services.

9.
All existing fault-tolerant job scheduling algorithms for computational grids were proposed under the assumption that all sites apply the same fault-tolerance strategy. They ignore the fact that each grid site may have its own fault-tolerance strategy, because each site is itself an autonomous domain. In fact, it is very common for multiple fault-tolerance strategies to be adopted at the same time in a large-scale computational grid. Various fault-tolerance strategies may have different hardware and software requirements. For instance, if a grid site employs a job checkpointing mechanism, each computation node must periodically transmit the transient state of the job execution to the server; if a job fails, it migrates to another computation node and resumes from the last stored checkpoint. Therefore, in this paper we propose a genetic algorithm for job scheduling that addresses the heterogeneity of fault-tolerance mechanisms in a computational grid. We assume that the system supports four kinds of fault-tolerance mechanisms: job retry, job migration without checkpointing, job migration with checkpointing, and job replication. Because each fault-tolerance mechanism has different requirements for gene encoding, we also propose a new chromosome encoding approach to integrate the four kinds of mechanisms in one chromosome. The risky nature of the grid environment is also taken into account in the algorithm: the risk relationship between jobs and nodes is defined by the security demand and the trust level. Simulation results show that our algorithm achieves a shorter makespan and is more effective at reducing the job failure rate than the Min–Min and Sufferage algorithms.
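One way to picture a chromosome that integrates the four fault-tolerance mechanisms is to pair each job's assigned node with the mechanism that node's site supports. The encoding and the toy makespan fitness below are hypothetical illustrations, not the paper's actual scheme.

```python
# Hypothetical sketch of a chromosome encoding for heterogeneous fault-tolerance
# scheduling: one gene per job, pairing the assigned node with the mechanism
# supported at that node's site. Encoding and fitness are illustrative
# assumptions, not the paper's actual design.
import random

MECHANISMS = ("retry", "migrate", "migrate_ckpt", "replicate")

def random_chromosome(n_jobs, nodes):
    """nodes: {node_id: supported fault-tolerance mechanism}. One gene per job."""
    genes = []
    for _ in range(n_jobs):
        node = random.choice(list(nodes))
        genes.append((node, nodes[node]))   # the mechanism follows the node's site
    return genes

def makespan(chromosome, job_len, node_speed):
    """Toy fitness: the longest per-node sum of job_len / node_speed."""
    load = {}
    for job, (node, _mech) in enumerate(chromosome):
        load[node] = load.get(node, 0.0) + job_len[job] / node_speed[node]
    return max(load.values())

nodes = {"n1": "retry", "n2": "migrate_ckpt", "n3": "replicate"}
chrom = random_chromosome(n_jobs=4, nodes=nodes)
```

A GA would then evolve such chromosomes with crossover and mutation, using makespan (and a risk/trust penalty) as fitness.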

10.
11.
This paper presents an automated system for multiple-sensor placement based on the coordinated decisions of independent, intelligent agents. The problem domain is such that a single sensor system would not provide adequate information for a given sensing task; hence, multiple sensors must be incorporated to obtain complete information. The overall goal of the system is to provide the surface coverage necessary to perform feature inspection on one or more target objects in a cluttered scene. This is accomplished by a group of cooperating intelligent sensors. In this system the sensors are mobile and the target objects are stationary; each agent controls the position of a sensor and can communicate with the other agents in the environment. By communicating desires and intentions, each agent develops a mental model of the other agents' preferences, which is used to avoid or resolve conflict situations. In this paper we use cameras as the sensors. The experimental results illustrate the feasibility of autonomous sensor deployment and show that this deployment can occur with sufficient accuracy to allow the inspection task to be performed.

12.
Tool integration is a very difficult challenge. Problems may arise at different abstraction levels and from several sources, such as heterogeneity of manipulated data, incompatible interfaces, or uncoordinated services, to name just a few. On the other hand, applications based on the coherent composition of activities, components, services, and data from heterogeneous sources are increasingly present in our everyday lives. Consequently, tool integration takes on increasing significance.
In this paper we analyze the tool-integration problem at different abstraction levels and discuss different views of a layered software architecture that we have designed specifically for a middleware supporting the execution of distributed applications for the orchestration of human/system activities. We found that the agent paradigm provides a suitable technology for abstraction in tool integration. Throughout the paper, the discussion refers to a case study in the bioinformatics domain.

13.
This paper proposes a two-level scheduler for dynamically scheduling a continuous stream of sequential and multi-threaded batch jobs on grids made up of interconnected clusters of heterogeneous single-processor and/or symmetric multiprocessor machines. The scheduler aims to schedule arriving jobs while respecting their computational and deadline requirements and optimizing hardware and software resource usage. At the top of the hierarchy, a lightweight meta-scheduler (MS) classifies incoming jobs according to their requirements and schedules them among the underlying resources, balancing the workload. At the cluster level, a Flexible Backfilling algorithm carries out the job-machine associations by exploiting dynamic information about the environment. Scheduling decisions at both levels are based on job priorities computed using different sets of heuristics. The different proposals have been compared through simulations; the performance figures show the feasibility of our approach.
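The two-level idea can be sketched as a priority heuristic at the meta-scheduler plus a backfilling pass at the cluster. Everything below (the priority weights, the job fields, the greedy fit) is an assumed simplification for illustration, not the paper's Flexible Backfilling algorithm.

```python
# Illustrative sketch of the two-level scheme from the abstract: jobs are ranked
# by a heuristic priority (deadline urgency plus waiting time), then a simplified
# backfilling pass starts, in priority order, every job that fits in the free
# CPUs, letting small jobs jump ahead of large ones that cannot fit. Weights and
# job fields are assumptions; this omits the reservations of real backfilling.

def priority(job, now, w_deadline=2.0, w_wait=1.0):
    urgency = 1.0 / max(job["deadline"] - now, 1)   # closer deadline -> higher
    return w_deadline * urgency + w_wait * (now - job["arrival"])

def backfill(queue, free_cpus, now):
    """Start every queued job, in priority order, that fits in the remaining
    free CPUs (a simplified backfilling pass without reservations)."""
    queue = sorted(queue, key=lambda j: -priority(j, now))
    started = []
    for job in queue:
        if job["cpus"] <= free_cpus:
            started.append(job["id"])
            free_cpus -= job["cpus"]
    return started

queue = [
    {"id": "big",   "cpus": 8, "deadline": 100, "arrival": 0},
    {"id": "small", "cpus": 2, "deadline": 100, "arrival": 0},
]
started = backfill(queue, free_cpus=4, now=10)   # "big" cannot fit; "small" backfills
```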

14.
This paper focuses on research on virtual supply chain networks, rather than real supply chain networks, making use of agent technology and the computational experiment method. However, recent research is inefficient in computational-experiment modeling and lacks a related methodological framework. This paper proposes an agent-based distributed computational experiment framework with an in-depth study of material-flow, information-flow, and time-flow modeling in supply chain networks. In this framework, a matrix-based formal representation method for material flow, a task-centered representation method for information flow, and an agent-based time synchronization mechanism for time flow are proposed to aid in building a high-quality computational experiment model for a multi-layer supply chain network. To run such models, a computational experiment architecture for virtual supply chain networks is proposed, in which coordination mechanisms among agents based on material, information, and time flows, as well as consistency-check methods for computational experiment models, are discussed. Finally, an implementation architecture for the framework is given, and a case study of a virtual supply chain network is developed to illustrate its application. The computational experiment results of the case show that the proposed framework is not only feasible and correct but also has sound advantages in virtual supply chain network development, computational experiment modeling, and implementation.

15.
A major problem facing manufacturing organisations is how to provide efficient and cost-effective responses to the unpredictable changes taking place in a global market. This problem is made difficult by the complexity of supply chain networks, coupled with the complexity of the individual manufacturing systems within those supply chains. Current systems such as manufacturing execution systems (MES), supply chain management (SCM) systems and enterprise resource planning (ERP) systems do not provide adequate facilities for addressing this problem. This paper presents an approach that would enable manufacturing organisations to dynamically and cost-effectively integrate, optimise, configure, simulate, restructure and control not only their own manufacturing systems but also their supply networks, in a coordinated manner, to cope with the dynamic changes occurring in a global market. This is realised through the synergy of two emerging manufacturing concepts: agent-based agile manufacturing systems and e-manufacturing. The idea is to represent a complex manufacturing system and its supply network with an agent-based modelling and simulation architecture, and to dynamically generate alternative scenarios for planning, scheduling, configuration and restructuring of both the manufacturing system and its supply network, based on the coordinated interactions among agents.

16.
DPP: An agent-based approach for distributed process planning
A changing shop-floor environment characterized by a larger variety of products in smaller batch sizes requires an intelligent and dynamic process planning system that is responsive and adaptive to rapid adjustments of production capacity and functionality. In response to this requirement, this research proposes a new methodology of distributed process planning (DPP). The primary focus of this paper is the architecture of the new process planning approach, which uses multi-agent negotiation and cooperation; the secondary focus is on supporting technologies such as machining feature-based planning and function block-based control. Unlike traditional methods, the proposed approach uses two-level decision making: supervisory planning and operation planning. The former focuses on product data analysis, machine selection, and machining sequence planning, while the latter considers the detailed working steps of the machining operations inside each process plan and is accomplished by intelligent NC controllers. By the nature of its decentralization, DPP shows promise for improving system performance within a continually changing shop-floor environment.

17.
Advances in network technologies and the emergence of Grid computing have both increased the need for, and provided the infrastructure for, computation- and data-intensive applications that run over collections of heterogeneous and autonomous nodes. In the context of database query processing, existing parallelisation techniques cannot operate well in Grid environments, because the way they select machines and allocate tasks compromises partitioned parallelism. The main contribution of this paper is a low-complexity, practical resource-selection and scheduling algorithm that enables queries to employ partitioned parallelism in order to achieve better performance in a Grid setting. The evaluation results show that the proposed scheduler outperforms current techniques without sacrificing the efficiency of resource utilisation. Recommended by: Ioannis Vlahavas

18.
Due to dynamic fluctuations of customer demand in the global market, manufacturing enterprises face difficulties in rapidly responding to market changes. The aim of this research is to develop a system that integrates dynamic process planning and dynamic production scheduling in order to increase the responsiveness of adaptive manufacturing systems in accommodating dynamic market changes (rapidly changing demand patterns or product varieties). The concept of multi-agent systems (MAS) has been adopted in this study: all of the tasks related to process planning, optimization, and scheduling are carried out by autonomous agents that interact and negotiate with each other to bid for jobs and make decisions. On the one hand, the system optimizes the utilization of manufacturing resources; on the other, it provides a platform on which the reconfiguration of manufacturing systems can be assessed. The system has been implemented on a Java platform, and a case study is provided to elaborate on the system and evaluate its implementation.

19.
20.
Based on the 50 papers surveyed in Reference 2), this paper addresses general research trends in agent-based macroeconomics. On the aspect of agent engineering, we highlight two major developments: first, the extensive application of computational intelligence tools in modeling adaptive behavior, and second, the grounding of these applications in the cognitive sciences.
Shu-Heng Chen, Ph.D.: He is a professor in the Department of Economics of the National Chengchi University. He now serves as the director of the AI-ECON Research Center, National Chengchi University, the editor-in-chief of the forthcoming journal "Fuzzy Mathematics and Natural Computing" (World Scientific), and a member of the editorial board of The Journal of Management and Economics. Dr. Chen holds an M.A. degree in mathematics and a Ph.D. in economics from the University of California, Los Angeles. His research interests are mainly in the applications of computational intelligence to agent-based computational economics and finance.

