Similar Articles
20 similar articles found (search time: 593 ms)
1.
Load balancing and task partitioning are important components of distributed computing. Optimum performance from a distributed computing system is achieved through effective scheduling and load balancing strategies. Researchers have thoroughly explored scheduling and load balancing techniques for CPU-, memory-, and I/O-intensive tasks. However, a main shortcoming of existing load balancing techniques is that they ignore applications with a mixed nature of tasks, because strategies developed for one kind of job are not effective for another. In this paper we propose a load balancing scheme, Mixed Task Load Balancing (MTLB), for Cluster of Workstations (CW) systems. In the proposed MTLB strategy, pre-tasks are assigned to each worker by the master to eliminate worker idle time. A main feature of the MTLB strategy is that it avoids the otherwise inevitable overhead of worker selection. Furthermore, the strategy employs Three Resources Consideration (TRC) for load balancing (CPU, memory, and I/O) and removes the overheads of previously proposed strategies. Measured results show that the MTLB strategy yields a significant improvement in performance.

2.
We consider the following general problem, which models load balancing in a variety of distributed settings. Given an arbitrary undirected connected graph G=(V,E) and a weight distribution w_0 on the nodes, determine a schedule for moving weights across edges in each step so as to (approximately) balance the weights on the nodes. We focus on diffusive schedules for this problem. All previously studied diffusive schedules can be modeled as w_{t+1} = M w_t, where w_t is the weight distribution after t steps and M is a doubly stochastic matrix; we call these first-order schedules. First-order schedules, although widely used in practice, are often slow. In this paper we introduce a new direction in diffusive schedules by considering schedules modeled as w_1 = M w_0; w_{t+1} = β M w_t + (1−β) w_{t−1} for some appropriate β; we call these second-order schedules. In the idealized setting where weights are real numbers, we adapt known results to show that β can be chosen so that the second-order schedule requires significantly fewer steps than the first-order method for approximate load balancing. In the realistic setting where weights are positive integers, we simulate the idealized schedules by maintaining "I Owe You" (IOU) units on the edges. Extensive experiments with simulated data and real-life data from JOSTLE, a mesh-partitioning package, show that the resulting realistic schedule closely tracks the idealized schedule and again requires fewer steps than first-order schedules for approximate load balancing. Our main result is therefore a fast algorithm for coarse load balancing that can be used in a variety of applications. Received October 1996, and in final form January 1998.
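The first- and second-order schedules above can be compared numerically. The following is a minimal sketch in the idealized real-weight setting; the diffusion matrix M = I − L/(dmax+1) and the choice β = 2/(1+√(1−γ²)) (the usual second-order Richardson tuning, with γ the second-largest eigenvalue modulus of M) are standard assumptions, not the paper's exact schedule:

```python
import numpy as np

def diffusion_matrix(edges, n):
    """Doubly stochastic M = I - L/(dmax+1) for an undirected graph."""
    A = np.zeros((n, n))
    for i, j in edges:
        A[i, j] = A[j, i] = 1.0
    deg = A.sum(axis=1)
    L = np.diag(deg) - A
    return np.eye(n) - L / (deg.max() + 1)

def first_order(M, w0, steps):
    w = w0.copy()
    for _ in range(steps):
        w = M @ w                          # w_{t+1} = M w_t
    return w

def second_order(M, w0, beta, steps):
    w_prev, w = w0.copy(), M @ w0          # w_1 = M w_0
    for _ in range(steps - 1):
        w_prev, w = w, beta * (M @ w) + (1 - beta) * w_prev
    return w

# Path graph on 8 nodes, all load initially on node 0.
n = 8
edges = [(i, i + 1) for i in range(n - 1)]
M = diffusion_matrix(edges, n)
w0 = np.zeros(n); w0[0] = 800.0
mean = w0.sum() / n

# gamma = second-largest eigenvalue modulus of M (excluding the eigenvalue 1).
ev = np.linalg.eigvalsh(M)                 # ascending order; ev[-1] is 1
gamma = np.abs(ev[:-1]).max()
beta = 2.0 / (1.0 + np.sqrt(1.0 - gamma ** 2))

err1 = np.abs(first_order(M, w0, 100) - mean).max()
err2 = np.abs(second_order(M, w0, beta, 100) - mean).max()
```

On this slow-mixing path graph the second-order residual `err2` is many orders of magnitude below the first-order `err1` after the same number of steps, illustrating the claimed speedup.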

3.
Although e-Research has received much attention and acclaim in recent years, the realities of distributed collaboration still challenge even the most well-planned endeavors. This case study of an e-Research project examines the ‘balancing acts’ associated with multidisciplinary, geographically distributed, large-scale research and development work. After briefly describing the history and organizational design of this information technology and atmospheric science research project, I identify five paradoxical challenges that cannot be resolved: research versus development, harmony versus conflict, consensus versus top-down decision making, frequency and modes of communication, and fast versus slow pacing. Although collaboration and communication technologies supported the project’s management and organization, most of the complexities faced by the team were not technological in nature. From the five paradoxical challenges associated with the project, I distill three cross-cutting issues that could be relevant to other e-Research projects of this magnitude: satisfying the multiple needs of a multidisciplinary project, managing information, and engaging all participants. I identify the practical implications of these challenges and issues, specifically that organizational and low-tech solutions, not the introduction of more sophisticated technology tools, are needed to address these challenges and to streamline coordination.

4.
On the Analysis of Randomized Load Balancing Schemes
It is well known that simple randomized load balancing schemes can balance load effectively while incurring only a small overhead, making such schemes appealing for practical systems. In this paper we provide new analyses for several such dynamic randomized load balancing schemes. Our work extends a previous analysis of the supermarket model, a model that abstracts a simple, efficient load balancing scheme in the setting where jobs arrive at a large system of parallel processors. In this model, customers arrive at a system of n servers as a Poisson stream of rate λn, λ < 1, with service requirements exponentially distributed with mean 1. Each customer chooses d servers independently and uniformly at random from the n servers, and is served according to the First In First Out (FIFO) protocol at the choice with the fewest customers. For the supermarket model, it has been shown that using d = 2 choices yields an exponential improvement in the expected time a customer spends in the system over d = 1 choice (simple random selection) in equilibrium. Here we examine several variations, including constant service times and threshold models, where a customer makes up to d successive choices until finding one below a set threshold. Our approach involves studying limiting, deterministic models representing the behavior of these systems as the number of servers n goes to infinity. Results of our work include useful general theorems for showing that these deterministic systems are stable or converge exponentially to fixed points. We also demonstrate that allowing customers two choices instead of just one leads to exponential improvements in the expected time a customer spends in the system in several of the related models we study, reinforcing the concept that just two choices yields significant power in load balancing. Received November 18, 1997, and in final form September 9, 1998.
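The power of two choices is easy to see even in a static balls-into-bins analogue of the supermarket model (a toy sketch, not the paper's Poisson-arrival queueing system): each incoming job probes d random servers and joins the least-loaded probe.

```python
import random

def max_load(n_bins, n_balls, d, rng):
    """Throw n_balls into n_bins; each ball probes d random bins
    and lands in the least-loaded probe. Return the maximum load."""
    load = [0] * n_bins
    for _ in range(n_balls):
        probes = [rng.randrange(n_bins) for _ in range(d)]
        best = min(probes, key=lambda b: load[b])
        load[best] += 1
    return max(load)

one = max_load(1000, 1000, 1, random.Random(42))   # d = 1: random placement
two = max_load(1000, 1000, 2, random.Random(42))   # d = 2: best of two probes
```

With n = m = 1000, a single random choice typically yields a maximum load of Θ(log n / log log n), while two choices drop it to Θ(log log n) — a dramatic, easily reproduced gap.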

5.
A dynamic load balancing algorithm for distributed file systems
To address the load balancing problem in distributed file systems, a variety of load balancing algorithms were studied and their strengths and weaknesses analyzed. Combining the advantages of two classes of algorithms — static weighted round-robin and dynamic load balancing — an adaptive hybrid dynamic load balancing algorithm is proposed. It is simple to implement and reduces the overhead of collecting feedback information without sacrificing timeliness. Experimental comparison shows that the algorithm effectively reduces average response time, improves throughput, and distributes load more evenly.
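The static weighted round-robin half of such a hybrid can be sketched with the well-known "smooth" variant, which spreads a heavy server's turns across the cycle; the dynamic half (periodically refreshing weights from server load reports) is only indicated by the `update_weights` hook, which is an assumed illustration rather than the paper's algorithm:

```python
class SmoothWeightedRR:
    """Smooth weighted round-robin: within one cycle of length
    sum(weights), each server is picked weight[s] times, with the
    picks of heavy servers interleaved rather than bunched."""

    def __init__(self, weights):
        self.weights = dict(weights)           # server -> weight
        self.current = {s: 0 for s in weights}

    def pick(self):
        total = sum(self.weights.values())
        for s, w in self.weights.items():
            self.current[s] += w               # accrue credit
        best = max(self.current, key=self.current.get)
        self.current[best] -= total            # spend a full cycle of credit
        return best

    def update_weights(self, new_weights):
        """Feedback hook (sketch): refresh weights from load reports."""
        self.weights.update(new_weights)

rr = SmoothWeightedRR({"a": 5, "b": 1, "c": 1})
cycle = [rr.pick() for _ in range(7)]          # one full cycle
```

Over one cycle of 7 picks, server `a` is chosen 5 times and `b`, `c` once each, but never 5 times in a row — the smoothing that makes response times more even.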

6.
We consider the problem of learning in repeated general-sum matrix games when a learning algorithm can observe the actions but not the payoffs of its associates. Due to the non-stationarity of the environment caused by learning associates in these games, most state-of-the-art algorithms perform poorly in some important repeated games because they are unable to make profitable compromises. To make these compromises, an agent must effectively balance competing objectives, including bounding losses, playing optimally with respect to current beliefs, and taking calculated but profitable risks. In this paper, we present, discuss, and analyze M-Qubed, a reinforcement learning algorithm designed to overcome these deficiencies by encoding and balancing best-response, cautious, and optimistic learning biases. We show that M-Qubed learns to make profitable compromises across a wide range of repeated matrix games played with many kinds of learners. Specifically, we prove that M-Qubed’s average payoffs meet or exceed its maximin value in the limit. Additionally, we show that, in two-player games, M-Qubed’s average payoffs approach the value of the Nash bargaining solution in self play. Furthermore, it performs very well when associating with other learners, as evidenced by its robust behavior in round-robin and evolutionary tournaments of two-player games. These results demonstrate that an agent can learn to make good compromises, and hence receive high payoffs, in repeated games by effectively encoding and balancing best-response, cautious, and optimistic learning biases.

7.
In 1989 I made a drawing of a net on a cube, consisting of 12 lines/elements. They were connected in a way that, a couple of months later, I recognised on folio 899v of Leonardo’s Codex Atlanticus. I don’t know which moment impressed me the most: my own discovery of a very simple and powerful connecting system, or the discovery of the Leonardo drawings, which implied that my own discovery was in fact a rediscovery. What we see in Leonardo’s drawings are examples of roof constructions built from many straight elements. These drawings can be ‘translated’ into the following definition: on each element we define four points at some distance from each other — two points somewhere in the middle and two points closer to the ends. To make constructions with these elements we need only connect a middle point of one element to an end point of another in a regular over-under pattern. From this simple definition of the elements, I designed many different patterns for my so-called “+ - - +” structures: domes, spheres, cylinders and other models.

8.
Many meta-heuristic methods have been applied to the two-sided assembly line balancing problem of type I with the objective of minimizing the number of stations, but some of them are too complex or intricate to extend. In addition, different decoding schemes and different objectives have been proposed, leading to different performances of these algorithms and unfair comparisons. In this paper, two new decoding schemes with reduced search space are developed to balance the workload within a mated-station and reduce sequence-dependent idle time. Then, graded objectives are employed to preserve minor improvements in the solutions. Finally, a simple iterated greedy algorithm is extended to the two-sided assembly line balancing problem: a modified NEH-based heuristic is introduced to obtain a high-quality initial solution, and an improved local search with referenced permutation and reduced insert operators is developed to accelerate the search process. Computational results on benchmark problems demonstrate the efficiency of the proposed decoding schemes and the new graded objectives. A comprehensive computational comparison among 14 meta-heuristics demonstrates the efficiency of the improved iterated greedy algorithm.
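The destruction–construction loop at the core of iterated greedy can be sketched generically. This is a toy sketch on a permutation objective (counting out-of-order pairs, i.e. inversions) with an accept-always rule — it does not reproduce the paper's two-sided line balancing encoding, decoding schemes, or graded objectives:

```python
import random

def inversions(perm):
    """Toy objective to minimize: number of out-of-order pairs."""
    return sum(1 for i in range(len(perm)) for j in range(i + 1, len(perm))
               if perm[i] > perm[j])

def iterated_greedy(perm, objective, d=2, iters=200, seed=0):
    """Iterated greedy: repeatedly remove d random elements (destruction)
    and greedily reinsert each at its best position (construction)."""
    rng = random.Random(seed)
    best = list(perm)
    best_val = objective(best)
    cur = list(best)
    for _ in range(iters):
        # Destruction: remove d random elements.
        partial = list(cur)
        removed = [partial.pop(rng.randrange(len(partial))) for _ in range(d)]
        # Construction: greedy best-position reinsertion.
        for x in removed:
            pos = min(range(len(partial) + 1),
                      key=lambda p: objective(partial[:p] + [x] + partial[p:]))
            partial.insert(pos, x)
        cur = partial                      # accept-always variant
        if objective(cur) < best_val:
            best, best_val = list(cur), objective(cur)
    return best, best_val

start = [4, 1, 5, 2, 0, 3]
best, val = iterated_greedy(start, inversions)
```

In the paper's setting, the permutation would be a task priority sequence and the objective would come from a decoding scheme that builds mated-stations; the skeleton stays the same.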

9.
The task of balancing assembly lines is of considerable industrial importance. It consists of assigning operations to workstations in a production line in such a way that (1) no assembly precedence constraint is violated, (2) no workstation in the line takes longer than a predefined cycle time to perform all tasks assigned to it, and (3) as few workstations as possible are needed to perform all the tasks in the set. This paper presents a new multiple-objective simulated annealing (SA) algorithm for simple (straight-line) and U-type assembly line balancing problems, with the aim of maximizing the “smoothness index” and the “line performance” (or minimizing the number of workstations). The proposed algorithm makes use of task assignment rules in constructing feasible solutions. The algorithm is tested and compared against test problems from the literature, and found the optimal solutions for each problem in short computational times. A detailed performance analysis of the selected task assignment rules is also given in the paper.
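The "smoothness index" referred to above is conventionally computed from station workloads as SI = √(Σᵢ (T_max − Tᵢ)²), where Tᵢ is the total task time assigned to station i; a small worked sketch, assuming that standard definition:

```python
def smoothness_index(station_times):
    """SI = sqrt(sum over stations of (T_max - T_i)^2).
    SI = 0 means a perfectly balanced line; larger values mean
    more workload spread between stations."""
    t_max = max(station_times)
    return sum((t_max - t) ** 2 for t in station_times) ** 0.5

perfect = smoothness_index([7, 7, 7])   # balanced: SI = 0
uneven = smoothness_index([6, 7, 5])    # SI = sqrt(1 + 0 + 4) = sqrt(5)
```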

10.
For all of the criticism of Eero Saarinen’s Kresge Auditorium and MIT Chapel, they exist as a highly focused moment of deliberate experimentation with geometric form in materials old and new, which both contrasted with the typical forms of rational modernism and resonated deeply with the modernist quest to incorporate novel structures. This paper explores the metaphorical and literal tensions through three dichotomies: geometrical ones with implications for acoustics, programmatic ones with implications for use, and structural ones between the appearances and actual structural actions of the architecture. I seek to illuminate how the geometric issues of both buildings relate to structural optimization. I also approach the Auditorium and Chapel from my experience as a performer and composer of instrumental and vocal music for both spaces while earning degrees in architecture at MIT. The simple act of listening defies one’s typical expectations in both spaces, and the dichotomies of geometry, use, and structure illuminate the relationship of sound to place in these architectural spaces.

11.
12.
The provision of high-speed, reliable Internet access and the ability to support secure and flexible on-line systems for research, teaching and administration has become critical to the success of Australian universities. An exponential growth in Internet traffic has led to ever-increasing costs for the provision of these services at a time when most Australian universities have been experiencing tighter budgetary conditions. Significantly, alongside these financial concerns, universities have started to recognise the emergence of a range of other issues related directly to the on-line behaviours of the diverse users that universities are now expected to support. These on-line behaviours are challenging universities to balance users’ rights to privacy and freedom of speech against the need to protect against legal action arising from criminal, illegal or inappropriate behaviours by some users on university networks. As part of the responses being developed, many universities have introduced Internet Management Systems (IMS), similar to the systems used by many Internet Service Providers (ISPs). This paper presents a case study on the experience of the University of Tasmania (UTAS) in introducing an IMS, covering the period from the initial ‘call for proposals’ through to the deployment of the new IMS. The paper highlights that decisions pertaining to IMS systems have direct implications for balancing the competing rights, interests and requirements of different stakeholders. More specifically, the case study highlights the impact of the changing nature of users’ relationships with the Internet and the need for vigilance on the part of users, network administrators, service providers and policy makers.
The dangers of failing to get the balance right are presented, and the paper argues for the importance of user education, change management and communication throughout the university and its broader community of users. The paper also briefly considers how Australia’s planned accession to the Council of Europe’s Convention on Cybercrime may affect these issues. More broadly, this paper suggests that additional changes will emerge as IPv6, companies like Google, and cloud computing architectures reconfigure individual users’ relationships with ‘their’ information and access to the Internet. These developments will continue to transform the meaning of concepts such as ownership and control, privacy and freedom of speech within and beyond on-line environments.

13.
Handwritten input can convey rich textual information with only a small amount of writing, and accurately recognizing hand-drawn sketches has attracted growing attention from researchers. Traditional sketch recognition algorithms mostly rely on the relatively fixed structural characteristics of sketches. Such methods achieve high recognition rates for clearly drawn, structurally simple sketches, but as the number of classes and the structural complexity of the sketches increase, they show limitations and often misclassify. To obtain better recognition results, this study takes sketches with fixed reference templates as its research object, preprocesses handwritten strokes with an image generation algorithm, and proposes a convolutional neural network based sketch recognition model (Trans-Net), in which transfer learning is used to address the small size of the sample library. Experimental results show that the method effectively extracts features from input sketch strokes and achieves an average recognition accuracy of 94.1% over the 150 sketch classes in the sample library.
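The transfer-learning idea behind Trans-Net — reuse a pretrained feature extractor and train only a small classification head on the limited sketch data — can be sketched framework-agnostically. This is a toy NumPy sketch in which a frozen random projection stands in for the pretrained CNN backbone, and a 3-class synthetic dataset stands in for the sketch library; Trans-Net's actual architecture and training details are not reproduced here:

```python
import numpy as np

rng = np.random.default_rng(0)

# Frozen "backbone": a fixed projection standing in for pretrained CNN features.
W_frozen = rng.normal(size=(64, 32))

def features(x):
    return np.maximum(x @ W_frozen, 0.0)   # frozen: never updated

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

# Tiny labeled set: 3 classes (standing in for the 150 sketch classes).
n_classes, n_per = 3, 30
X = np.concatenate([rng.normal(loc=c, size=(n_per, 64)) for c in range(n_classes)])
y = np.repeat(np.arange(n_classes), n_per)
Y = np.eye(n_classes)[y]

# Extract and standardize frozen features once; only the head is trained.
F = features(X)
F = (F - F.mean(axis=0)) / (F.std(axis=0) + 1e-8)

W_head = np.zeros((32, n_classes))          # trainable classification head

def loss(W):
    p = softmax(F @ W)
    return -np.log(p[np.arange(len(y)), y]).mean()

loss0 = loss(W_head)
for _ in range(200):
    grad = F.T @ (softmax(F @ W_head) - Y) / len(y)   # cross-entropy gradient
    W_head -= 0.01 * grad
loss1 = loss(W_head)
acc = (softmax(F @ W_head).argmax(axis=1) == y).mean()
```

Because only the 32×3 head is updated, very little labeled data is needed — the same reason transfer learning helps when the sketch sample library is small.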

14.
Origami Platonic solids

15.
It is widely recognised that paper remains a pervasive resource for collaboration, and yet there has been uncertain progress in developing technologies that aim to enhance paper documents with computational capabilities. In this article, we discuss the design of a technology that interweaves developments in hardware and materials, electronics and software, and seeks to create new affinities between digital content and paper. The design of the technology drew on findings from naturalistic studies of the uses of paper, particularly when considering how ‘users’ might ‘interact’ with the augmented technology. We briefly review these studies and discuss the results of an evaluation of the emerging technology. Analysis of the fine details of participants’ conduct in these assessments suggests how, even when we design simple forms of interaction with a device, these can be shaped and transformed by the participation and collaboration of others.

16.
We study a new formulation of bisimulation for the π-calculus [MPW92], which we have called open bisimulation (∼). In contrast with the previously known bisimilarity equivalences, ∼ is preserved by all π-calculus operators, including input prefix. The differences among all these equivalences already appear in the sublanguage without name restrictions: here the definition of ∼ can be factorised into a “standard” part which, modulo the different syntax of actions, is the CCS bisimulation, and a part specific to the π-calculus, which requires name instantiation. Attractive features of ∼ are: a simple axiomatisation (of the finite terms), with a completeness proof which leads to the construction of minimal canonical representatives for the equivalence classes of ∼; and an “efficient” characterisation, based on a modified transition system. This characterisation seems promising for the development of automated verification tools and also shows the call-by-need flavour of ∼. Although in the paper we stick to the π-calculus, the issues developed may be relevant to value-passing calculi in general. Received: June 11, 1993/November 28, 1994

17.
This paper discusses the development, operation, and testing of a heuristic line balancing program that solves the Type I line balancing problem on a microcomputer.

The line balancing method selected was based on previous studies that compared different line balancing techniques. The model is built primarily on the Hoffmann (1963) procedure with modifications described by Gehrlein and Patterson (1975). For purposes of comparison, the Ranked Positional Weight technique (Helgeson and Birnie, 1961) is also included in the model.

Testing included thirty-seven different balances using problems from the literature. For each balance, both Ranked Positional Weight and Hoffmann solutions were obtained in the forward and reverse directions. Delay factors used for the Hoffmann balances were 0.0, 0.5, 1.0, 1.25, 1.5, and 2.0. Thus for each balance problem a total of fourteen tests were conducted, giving 518 tests in all.

Four measures of performance were considered in this study: (1) the average percentage by which a balance exceeds the optimum solution, in terms of number of stations; (2) the time to obtain a balance; (3) the best solution, in terms of the lowest number of stations and lowest standard deviation of the slack times; and (4) the largest value of minimum station slack time. Overall, the modified Hoffmann procedure with a delay factor of 1.5 performed best. The time to obtain a balance using the modified Hoffmann procedure is significantly less than the time for a solution without the modifications.
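The Ranked Positional Weight technique used for comparison can be sketched briefly (a minimal sketch of the Helgeson–Birnie idea on a made-up four-task instance; the Hoffmann procedure and its delay factors are not reproduced here). The positional weight of a task is its own time plus the times of all its transitive successors; tasks are then assigned to stations in descending weight order, respecting precedence and the cycle time:

```python
def positional_weight(task, times, succs):
    """Task time plus the times of all (transitive) successors."""
    all_succ, stack = set(), list(succs.get(task, ()))
    while stack:
        s = stack.pop()
        if s not in all_succ:
            all_succ.add(s)
            stack.extend(succs.get(s, ()))
    return times[task] + sum(times[s] for s in all_succ)

def rpw_balance(times, preds, cycle_time):
    """Assign tasks to stations in descending positional-weight order,
    opening a new station when nothing feasible fits.
    Assumes every task time is at most cycle_time."""
    succs = {}
    for t, ps in preds.items():
        for p in ps:
            succs.setdefault(p, []).append(t)
    order = sorted(times, key=lambda t: -positional_weight(t, times, succs))
    assigned, stations = set(), []
    while len(assigned) < len(times):
        slack, station = cycle_time, []
        while True:
            ready = [t for t in order if t not in assigned
                     and all(p in assigned for p in preds.get(t, ()))
                     and times[t] <= slack]
            if not ready:
                break
            t = ready[0]                    # highest positional weight first
            station.append(t); assigned.add(t); slack -= times[t]
        stations.append(station)
    return stations

# Hypothetical instance: A and B precede C, C precedes D; cycle time 7.
times = {"A": 4, "B": 3, "C": 2, "D": 5}
preds = {"C": ["A", "B"], "D": ["C"]}
stations = rpw_balance(times, preds, cycle_time=7)
```

Here the positional weights are A=11, B=10, C=7, D=5, so the heuristic packs A and B into the first station (4+3=7) and C and D into the second (2+5=7) — two fully utilized stations.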


18.
We consider the problem of dynamic load balancing in arbitrary (connected) networks on n nodes. Our load generation model is such that during each round, n tasks are generated on arbitrary nodes, and then (possibly after some balancing) one task is deleted from every non-empty node. Notice that this model fully saturates the resources of the network, in the sense that we generate just as many new tasks per round as the network is able to delete. We show that even in this situation the system is stable: the total load remains bounded (as a function of n alone) over time. Our proof only requires that the underlying “communication” graph be connected. (It of course also works if we generate fewer than n new tasks per round, but the major contribution of this paper is the fully saturated case.) We further show that the upper bound we obtain is asymptotically tight (up to a moderate multiplicative constant) by demonstrating a corresponding lower bound on the system load for the particular example of a linear array (or path). We also show some simple negative results (i.e., instability) for work-stealing based diffusion-type algorithms in this setting. A preliminary version of this paper entitled “Dynamic diffusion load balancing” was published in Proc. 32nd International Colloquium on Automata, Languages and Programming (ICALP’05), Lecture Notes in Computer Science 3580, Springer-Verlag, pp. 1386–1398. P. Berenbrink is supported by NSERC Discovery Grant 250284-2002. R. Martin is supported by EPSRC grant “Discontinuous Behaviour in the Complexity of Randomized Algorithms.”

19.
Over the past few years, the amount of electronic information available through the Internet has increased dramatically. Unfortunately, the search tools currently available for retrieving and filtering information in this space are not effective at balancing relevance and comprehensiveness. This paper analyzes the results of experiments in which HTML documents are searched with user models and software agents used as intermediaries to the search. Simple user models are first combined with search specifications (or ‘User Needs’) to define an Enhanced User Need (EUN). Uniform Resource Agents are then constructed to filter information based on the EUN parameters. The results of searches using different agents are compared to those obtained through a comparable simple keyword search, and it is shown that a user searching a pool of existing agents can obtain better results than by conducting a traditional keyword search. This work thus demonstrates that user models and information-filtering agents do improve search results and may be used to improve Internet information retrieval. This revised version was published online in July 2006 with corrections to the Cover Date.

20.
Environmental issues in supply chains have become serious, concerning not only the reduction of CO2 emissions but also material starvation. End-of-life assembly products at recycling factories should be disassembled environmentally and economically, in consideration not only of low carbonization but also of material circulation. Parts with higher recycling and CO2-saving rates should be selected and manually disassembled; however, manual disassembly incurs a higher cost than simply disposing of the parts. Moreover, this parts selection also affects disassembly line balancing. This study proposes a simple and practical model and design of multi-criteria optimization for lower disassembly cost and higher recycling and CO2-saving rates, through environmental and economic parts selection and subsequent disassembly line balancing.
