Similar Documents
Found 20 similar documents (search time: 15 ms)
1.
Active queue management (AQM) is investigated to avoid incipient congestion in gateways, complementing the congestion control run by transport-layer protocols such as TCP. Most existing work on AQM can be categorized as (1) ad hoc event-driven control and (2) time-driven feedback control based on control theory. Ad hoc event-driven approaches to congestion control, such as RED (random early detection), lack a mathematical model; it is therefore hard to analyze their dynamics and tune their parameters. Time-driven control-theoretic approaches, while based on solid mathematical models, have drawbacks too: because they sample the queue length and run the AQM algorithm at fixed time intervals, they may not adapt quickly enough to an abrupt load surge, and they can be executed unnecessarily often under light loads. To seamlessly integrate the advantages of both event-driven and control-theoretic time-driven approaches, we present an event-driven feedback control approach grounded in formal control theory. Because our approach is based on a mathematical model, its performance is more analyzable and predictable than that of ad hoc event-driven approaches, and its event-driven nature makes it more reactive to dynamic load changes. Our simulation results show that the event-driven controller effectively maintains the queue length around the specified set-point. It achieves shorter end-to-end (E2E) delays and smaller E2E delay fluctuations than several existing ad hoc event-driven and time-driven control-theoretic AQM approaches, while achieving almost the same E2E delays and delay fluctuations as two other advanced control-theoretic AQM approaches. Further, our AQM algorithm is invoked much less frequently than the tested baselines.
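The abstract does not state the control law itself; as a minimal illustration of event-driven feedback AQM, the sketch below updates a packet-drop probability from the queue-length error on each arrival event. The proportional form, the set-point, and the gain `kp` are illustrative assumptions, not taken from the paper.

```python
# Minimal sketch of event-driven proportional AQM control.
# All parameter values (set_point, kp) are illustrative, not from the paper.

def aqm_drop_probability(queue_len, set_point=50, kp=0.005, p_prev=0.0):
    """Update the packet-drop probability from the queue-length error.

    Invoked on packet-arrival events (event-driven) rather than on a fixed
    timer, so it reacts immediately to an abrupt load surge and is not
    executed at all when no packets arrive.
    """
    error = queue_len - set_point      # deviation from the target queue length
    p = p_prev + kp * error            # proportional correction
    return min(max(p, 0.0), 1.0)       # clamp to a valid probability

# A queue well above the set-point raises the drop probability.
p = aqm_drop_probability(queue_len=150, p_prev=0.1)
```

In a real controller the gain would be derived from a fluid model of TCP dynamics, which is what makes the behavior analyzable in the first place.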

2.
For heterogeneous distributed computing systems, scalability and system optimization are important design issues. For such systems, it is crucial to develop low-complexity algorithms that schedule tasks in a manner that exploits the heterogeneity of the resources and applications. In this paper, we report and evaluate three scalable and fast scheduling heuristics for highly heterogeneous distributed computing systems. We conduct a comprehensive simulation-based performance evaluation. The benchmarking characterizes the schedulers in terms of scalability, makespan, flowtime, computational complexity, and memory utilization. The experimental results show that our heuristics perform as well as the traditional approaches on makespan and flowtime, while featuring lower complexity, lower running time, and lower memory usage. The results also detail the scenarios under which particular algorithms excel or fail.
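The makespan and flowtime objectives used in this evaluation can be sketched concretely. The greedy rule below (assign each task to the machine that would finish it earliest) is a generic illustration of a fast heuristic for heterogeneous machines, not one of the paper's three heuristics.

```python
# Illustrative greedy heuristic for heterogeneous machines, plus the
# makespan and flowtime metrics from the abstract. The greedy rule itself
# is an assumption for illustration, not the paper's specific heuristics.

def greedy_schedule(etc):
    """etc[i][j] = estimated time to compute task i on machine j."""
    n_machines = len(etc[0])
    ready = [0.0] * n_machines        # current finish time of each machine
    completions = []
    for task_times in etc:
        # pick the machine with the earliest completion time for this task
        j = min(range(n_machines), key=lambda m: ready[m] + task_times[m])
        ready[j] += task_times[j]
        completions.append(ready[j])
    makespan = max(ready)             # time the last machine finishes
    flowtime = sum(completions)       # sum of task completion times
    return makespan, flowtime

makespan, flowtime = greedy_schedule([[3, 5], [2, 4], [6, 1]])
```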

3.
This study proposes a 3-phase solution approach for a multi-product, parallel, multi-stage cellular manufacturing company. The study focuses on a case involving a shoe manufacturing plant in which products are produced according to their due dates. The investigated manufacturing process has three stages: lasting cells, rotary injection molding cells, and finishing-packaging cells. System performance is measured by total flowtime and makespan. The 3-phase approach is as follows: (1) the first phase allocates manpower to operations in the lasting cells and finishing-packaging cells independently, with the objective of maximizing the production rates in these cells; (2) the second phase performs cell loading to determine product families based on a similarity coefficient, using mathematical modeling and genetic algorithms (GA); the proposed GA performs mutation prior to crossover, breaking from the traditional genetic algorithm flow, and the flowtime and makespan measures are considered in this phase; (3) flow shop scheduling is then performed to determine the product sequence in each cell group (lasting, rotary injection molding, finishing-packaging). This 3-phase solution approach is repeated with alternative manpower allocations to the lasting and finishing-packaging cells while the total manpower level remains the same.

4.
The flexible job shop scheduling problem (FJSSP) is a generalization of the job shop scheduling problem (JSSP) in which an operation may be processed on more than one machine, each of which has the same function. Most previous research on the FJSSP assumed that all jobs to be processed are available at the beginning of the scheduling horizon. In practice, however, this assumption is routinely violated, because jobs usually arrive over time and cannot be predicted before their arrival. In this paper, the dynamic flexible job shop scheduling problem (DFJSSP) with job release dates is studied. A heuristic is proposed to implement reactive scheduling for the dynamic problem, and an approach based on gene expression programming (GEP) is proposed that automatically constructs reactive scheduling policies. To evaluate the performance of the policies constructed by the GEP-based approach under a variety of processing conditions, three factors (shop utilization, due-date tightness, and problem flexibility) are considered in the simulation experiments. The scheduling performance measures are the minimization of makespan, mean flowtime, and mean tardiness, respectively. The results show that, compared with previous approaches, the GEP-based approach constructs more efficient reactive scheduling policies for the DFJSSP with job release dates over a wide range of processing conditions and performance measures.

5.
This paper is motivated by the problem of meeting due dates in a flowshop production environment with jobs that have different weights and uncertain processing times. Enforcement of a permutation schedule to varying degrees in dynamic flowshops is investigated, with the goal of minimizing total weighted tardiness (TWT). The approaches studied are categorized as (1) pure permutation scheduling, (2) shift-based scheduling, and (3) pure dispatching (which leads to non-permutation sequences). A simulation-based experimental study was carried out to evaluate these methods with respect to minimizing TWT when new jobs arrive at the flowshop at every shift change. Results indicate that significant performance gains are possible when the permutation requirement is relaxed and shift-based scheduling is allowed. Shift-based scheduling yields results competitive with the pure dispatching approach, even though dispatching has the advantage of fully relaxing the permutation requirement.
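The TWT objective minimized above has a standard definition; for a given job sequence on a single processing stage it can be computed as follows (the example jobs are hypothetical):

```python
# Total weighted tardiness (TWT) of a given job sequence.
# Each job is (processing_time, due_date, weight); jobs run back to back.

def total_weighted_tardiness(jobs):
    t, twt = 0.0, 0.0
    for p, d, w in jobs:
        t += p                        # completion time of this job
        twt += w * max(0.0, t - d)    # tardiness counts only if the job is late
    return twt

# Moving the heavily weighted job earlier reduces TWT.
seq_a = [(4, 2, 1.0), (2, 3, 5.0)]    # heavy job finishes at 6, due 3
seq_b = [(2, 3, 5.0), (4, 2, 1.0)]    # heavy job finishes on time
```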

6.
Relatively little job shop scheduling research has focused on missed due-date performance. In this paper, we investigate how two important decision factors, dispatching rules and due-date assignment methods, affect missed due-date performance in job shop scheduling. A new dispatching rule and two new due-date setting models are developed to improve performance criteria based on missed due dates. The new due-date models dynamically adjust the flowtime estimate using feedback about current shop load conditions. Simulation results show that the proposed dynamic due-date models are significantly better than their static counterparts. The best missed due-date performance is observed when a dynamic due-date setting model is combined with an appropriate due-date-dependent dispatching rule. In addition, the new models are simple and easy to implement, requiring no preliminary runs for parameter estimation.
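The feedback idea in these due-date models can be sketched simply: scale the flowtime estimate by the current shop load. The linear form and every parameter below are illustrative assumptions; the abstract does not give the paper's actual model.

```python
# Hypothetical dynamic due-date assignment: the flowtime estimate is scaled
# by current shop load relative to a reference load. Illustrative only.

def dynamic_due_date(arrival, total_proc_time, shop_load, ref_load=1.0, k=3.0):
    load_factor = shop_load / ref_load        # > 1 when the shop is congested
    flow_estimate = k * total_proc_time * load_factor
    return arrival + flow_estimate

# A congested shop (load 1.5x the reference) yields a later due date than
# a shop at the reference load.
d_busy = dynamic_due_date(arrival=0.0, total_proc_time=10.0, shop_load=1.5)
d_calm = dynamic_due_date(arrival=0.0, total_proc_time=10.0, shop_load=1.0)
```

A static model would fix `load_factor` at 1, which is exactly what the feedback term improves upon.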

7.
The dynamic flexible job shop scheduling problem is studied under events such as new order arrivals, changes in due dates, machine breakdowns, order cancellations, and the appearance of urgent orders. This paper presents a constructive algorithm that can solve the FJSP and DFJSP with machine capacity constraints and sequence-dependent setup times, employing a greedy randomized adaptive search procedure (GRASP). An order review and release (ORR) mechanism and order acceptance/rejection decisions are also incorporated into the proposed method in order to adjust capacity execution with respect to customer due-date requirements. The lexicographic method is used to assess the objectives: schedule instability, makespan, mean tardiness, and mean flow time. A group of experiments is carried out to verify the suitability of GRASP for solving the flexible job shop scheduling problem, with benchmark problems formed for different problem scales with dynamic events. The event-driven rescheduling strategy is also compared with a periodic rescheduling strategy. The results of the extensive computational experiments show that the proposed approach is effective and can provide reasonable schedules under both event-driven and periodic scheduling scenarios.

8.
In this paper, we propose an evolutionary method with a simulation model for scheduling jobs whose operations are specified in terms of workload rather than processing time. It is suggested that processing times be determined according to the number of assigned resources rather than fixed by the workload alone. The simulation model is used to estimate the result of resource allocation over a time horizon based on preselected rules. The evolutionary method improves a production schedule in terms of compliance with due dates by selecting an alternative resource allocation rule and changing timing constraints. The results of computational experiments show that compliance with due dates improved by as much as 30% under the modified production schedule compared to the initial schedule.
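The core idea above is that an operation's processing time follows from how many resources are assigned to its fixed workload. A minimal linear model sketching this (the `workload / resources` form is our illustrative assumption, not necessarily the paper's):

```python
# Illustrative model: processing time derived from a fixed workload and the
# number of resources assigned to it. The linear form is an assumption.

def processing_time(workload, n_resources):
    if n_resources <= 0:
        raise ValueError("at least one resource must be assigned")
    return workload / n_resources

# Doubling the assigned resources halves the processing time in this model,
# which is what lets the evolutionary search trade resources for due-date
# compliance.
t = processing_time(workload=12, n_resources=3)
```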

9.
In this paper, a class of single machine scheduling problems is discussed. It is assumed that job parameters such as processing times, due dates, or weights are uncertain, with their values specified in the form of a discrete scenario set. The ordered weighted averaging (OWA) aggregation operator is used to choose an optimal schedule. The OWA operator generalizes traditional criteria used in decision making under uncertainty, such as the maximum, average, median, and Hurwicz criteria. It also allows the robust approach to scheduling to be extended by taking into account decision makers' varying attitudes towards risk. A general framework for solving single machine scheduling problems with the OWA criterion is proposed, and both positive and negative computational results for two basic single machine scheduling problems are provided.
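The OWA operator has a standard definition: sort the scenario costs in non-increasing order and combine them with a fixed weight vector. The sketch below shows how the weight choice recovers the maximum, average, and Hurwicz criteria mentioned above (the example costs are hypothetical):

```python
# Ordered weighted averaging (OWA) over scenario costs of a schedule.

def owa(values, weights):
    ordered = sorted(values, reverse=True)    # largest cost first
    return sum(w * v for w, v in zip(weights, ordered))

scenario_costs = [4, 1, 3]
worst   = owa(scenario_costs, [1, 0, 0])            # maximum (robust min-max)
mean    = owa(scenario_costs, [1/3, 1/3, 1/3])      # average criterion
hurwicz = owa(scenario_costs, [0.5, 0, 0.5])        # Hurwicz, alpha = 0.5
```

Intermediate weight vectors interpolate between these attitudes towards risk, which is what makes the criterion flexible.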

10.
Single machine flow-time scheduling with a single breakdown
Summary: We consider the problem of scheduling tasks on a single machine to minimize the flowtime. The machine is subject to breakdowns during the processing of the tasks: breakdowns occur at random times, and the machine is unavailable until it is repaired. The repair times are random and independent of each other and of the breakdown process. A task that is preempted by a breakdown must be restarted; preemptions are not otherwise allowed. For the case of a single breakdown, we show that if the distribution function of the time to breakdown is concave, then Shortest Processing Time (SPT) first scheduling stochastically minimizes the flowtime. For the case of multiple breakdowns, we show that SPT minimizes the expected flowtime when the times to breakdown are exponentially distributed. If the time of a single breakdown is known before scheduling begins, and the processing times of the tasks are also known, then the problem of deciding whether there is a schedule with flowtime less than or equal to a given value is NP-complete. Finally, we bound the performance of SPT scheduling in the deterministic case with a single breakdown. (The work of one author was partially supported by The Lady Davis Fellowship Trust through a Lady Davis Visiting Professorship at the Technion, Haifa, Israel.)
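The SPT rule and the flowtime it minimizes can be sketched as follows. Breakdowns and repairs are deliberately not modeled here; this only illustrates the objective in the deterministic, breakdown-free case (the job durations are hypothetical):

```python
# Flowtime of a single-machine schedule, and the SPT (shortest processing
# time first) order. No breakdowns are modeled in this sketch.

def flowtime(processing_times):
    """Sum of completion times when tasks run in the given order."""
    t, total = 0.0, 0.0
    for p in processing_times:
        t += p            # completion time of this task
        total += t
    return total

def spt(processing_times):
    return sorted(processing_times)

jobs = [5, 1, 3]
ft_spt = flowtime(spt(jobs))   # SPT order [1, 3, 5]
ft_bad = flowtime(jobs)        # an arbitrary order, never smaller
```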

11.
Manufacturing in a world-class environment demands a high level of customer service. The production control department is responsible for achieving this level of service through accurate planning and scheduling, yet its ability to do so is limited by the scheduling tools currently available. Production planning is typically performed using MRP with infinite capacity, fixed lead times, and backward scheduling; the work in each MRP time bucket is then sequenced to develop a schedule. The production floor, however, is not a static environment: dynamic events that cannot be scheduled degrade production performance relative to the projected schedule. In this paper, the relationship between dynamic events and schedule degradation is examined. Common approaches to production scheduling underestimate the effect of capacity loading on schedule achievability in the presence of unplanned events, which exhaust capacity previously allocated to production orders. To hedge against such known but unscheduled events, production control can schedule resources below full capacity. The size of the capacity hedge and the duration of the unplanned event dictate the time needed to recover from the resulting backlog. A performance metric is developed to measure the ability to meet customer promise dates, and a machine loading policy is presented that achieves the optimal capacity hedge point maximizing this measure. The predictions are compared against simulated failures to examine the accuracy of the predicted performance degradation. The results suggest a trade-off of throughput for improved performance against customer promise dates.

12.
The purpose of feature construction is to create new, higher-level features from the original ones. Genetic programming (GP) is commonly employed for feature construction tasks because of its flexible representation. Filter-based and wrapper-based approaches are the two commonly used feature construction approaches, distinguished by their evaluation functions. In this paper, we propose a hybrid feature construction approach using genetic programming (Hybrid-GPFC) that combines the filter fitness function and the wrapper fitness function, together with a multiple feature construction method that stores the top individuals from a single GP run. Experiments on ten datasets show that the proposed multiple feature construction method (Fcm) achieves better (or equivalent) classification performance than the single feature construction method (Fcs), and that Hybrid-GPFC obtains better classification performance than filter-based (Filter-GPFC) and wrapper-based (Wrapper-GPFC) feature construction in most cases. Further investigation shows that augmenting the constructed features with the original features does not improve classification performance compared with the constructed features alone. Comparisons with three state-of-the-art methods show that in the majority of cases our hybrid multiple feature construction approach achieves better classification performance.

13.
In this study, a mathematical programming approach is proposed to design a layered cellular manufacturing system for a highly fluctuating demand environment. A mathematical model is developed to create dedicated, shared, and remainder cells with the objective of minimizing the number of cells. In contrast with classical cellular manufacturing systems, in layered cellular systems some cells can serve multiple part families. A five-step hierarchical methodology is employed: (1) formation of part families, (2) calculation of expected cell utilizations and demand coverage probabilities, (3) specification of cell types as dedicated, shared, or remainder cells, (4) simulation of the proposed layered systems to evaluate their performance with respect to average flowtime and work-in-process inventory, and (5) statistical analysis to find the best layered cellular design among the alternatives. Designs with a higher number of part families are found to require fewer machines; similar results are observed for the average flowtime and work-in-process inventory measures. The results are also compared with a heuristic approach from the literature. Neither approach dominates on all performance measures: the mathematical modeling approach performs better in terms of the number of machines for most alternative designs, whereas the heuristic approach yields better average flowtime and work-in-process inventory for most designs.
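Step 2 of the methodology computes demand coverage probabilities. One simple way to sketch such a probability, assuming normally distributed demand (our assumption; the abstract does not specify the paper's demand model), is the chance that demand does not exceed a cell's capacity:

```python
import math

# Illustrative demand-coverage probability for a cell: P(demand <= capacity)
# under a normal demand model. The normal assumption and the numbers below
# are ours, for illustration only.

def coverage_probability(capacity, mean_demand, std_demand):
    z = (capacity - mean_demand) / (std_demand * math.sqrt(2))
    return 0.5 * (1.0 + math.erf(z))    # standard normal CDF via erf

# Capacity equal to mean demand covers demand about half the time; extra
# capacity (a buffer of two standard deviations) covers almost always.
p_tight = coverage_probability(capacity=100, mean_demand=100, std_demand=20)
p_slack = coverage_probability(capacity=140, mean_demand=100, std_demand=20)
```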

14.
Considering a dynamic single machine problem in which operations cannot be split, we first develop a decision-theory-based heuristic called DT-TD (Decision Theory-Tactically Delayed) with computational complexity O(n^2). Using a simple look-ahead procedure, it produces tactically delayed (TD) schedules. We then develop a branch-and-bound (BB) algorithm, which uses DT-TD to obtain its initial upper bound, to find the optimum schedule. The optimum schedules are examined to identify conditions under which TD schedules are necessary. Results based on 540 test problems suggest that, within the range of conditions examined, TD schedules are important for job shop scheduling when due dates are arbitrary and utilization is low. Additional results indicate that the difference between the optimum schedule and the optimum non-delay schedule can be substantial. Finally, the performance of the DT-TD heuristic is analyzed by comparing its solutions to the optimum solutions obtained with the BB algorithm; the results indicate that the DT-TD heuristic is effective.

15.
This paper presents a novel divide-and-integrate strategy for solving large-scale job-shop scheduling problems. The proposed approach works in three phases. First, in contrast to traditional job-shop scheduling approaches, where optimization algorithms are applied directly regardless of problem size, priority rules are deployed to reduce the problem scale; these rules are built from slack due dates and the mean processing time of jobs. An immune algorithm is then applied to solve each small scheduling module. In the last phase, an integration scheme amalgamates the small modules into a gross schedule with minimum makespan. The integration is carried out dynamically by continuously checking the preceding module's machine idle time and feasible slots (satisfying all the constraints). In this way, the proposed approach increases machine utilization and decreases the makespan of the gross schedule. The efficacy of the approach has been tested on extremely hard standard test instances of job-shop scheduling problems, and the implementation results clearly show its effectiveness.

16.
The event-driven programming pattern is pervasive in a wide range of modern software applications. Unfortunately, it is not easy to achieve good performance and responsiveness when developing event-driven applications: traditional approaches require a great amount of programmer effort to restructure and refactor code in order to obtain the speedup available from parallelism and asynchronous execution. Not only does this restructuring require a lot of development time, it also makes the code harder to debug and understand. We propose an asynchronous programming model based on the philosophy of OpenMP that does not require restructuring of the original sequential code. This asynchronous model is complementary to the existing OpenMP fork-join model, and the coexistence of the two models has the potential to decrease development time for parallel event-driven programs, since it avoids major code refactoring. In addition to its programming simplicity, evaluations show that this approach achieves performance improvements consistent with more traditional event-driven parallelization.

17.
Message-oriented event-driven systems are becoming increasingly ubiquitous in many industry domains, including telecommunications, transportation, and supply chain management. Applications in these areas typically have stringent requirements for performance and scalability; to guarantee adequate quality of service, systems must be subjected to rigorous performance and scalability analysis before they are put into production. In this paper, we present a comprehensive modeling methodology for message-oriented event-driven systems in the context of a case study of a representative application in the supply chain management domain. The methodology, based on queueing Petri nets, provides a basis for performance analysis and capacity planning. We study a deployment of the SPECjms2007 standard benchmark on a leading commercial middleware platform: a detailed system model is built step by step and then used to predict system performance under various workload and configuration scenarios. Following the case study, we present a set of generic performance modeling patterns that can be used as building blocks when modeling message-oriented event-driven systems. The results demonstrate the effectiveness, practicality, and accuracy of the proposed modeling and prediction approach.

18.
Based on the fundamental principles of birth-death processes, an event-driven queueing system simulator is designed and implemented, built around the handling of specific events in the queueing system. Given a particular system configuration, the simulator can analyze real data and simulate specific processes, and the development interfaces it includes allow its simulation capabilities to be extended. Experiments show that the simulator completes given simulation runs quickly and produces results very close to theoretical values, so it can be used for the simulation and analysis of complex queueing problems.
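The structure of such an event-driven queue simulator can be sketched for the simplest birth-death queue, M/M/1, where the simulated mean number in system can be checked against the theoretical value L = rho / (1 - rho). The choice of M/M/1 and all parameter values are our illustrative assumptions, not details from the abstract.

```python
import random

# Minimal event-driven M/M/1 simulator: arrival and departure events drive
# the state; the time-average number in system is compared with the
# birth-death theoretical value L = rho / (1 - rho). Parameters illustrative.

def simulate_mm1(lam, mu, horizon, seed=0):
    rng = random.Random(seed)
    t, n = 0.0, 0                     # current time, customers in system
    area = 0.0                        # time-integral of n, for the mean
    next_arrival = rng.expovariate(lam)
    next_departure = float("inf")     # no one in service yet
    while t < horizon:
        t_next = min(next_arrival, next_departure)
        area += n * (min(t_next, horizon) - t)
        t = t_next
        if t >= horizon:
            break
        if next_arrival <= next_departure:        # arrival event (birth)
            n += 1
            next_arrival = t + rng.expovariate(lam)
            if n == 1:                            # service starts immediately
                next_departure = t + rng.expovariate(mu)
        else:                                     # departure event (death)
            n -= 1
            next_departure = (t + rng.expovariate(mu)) if n > 0 else float("inf")
    return area / horizon             # time-average number in system

L_sim = simulate_mm1(lam=0.5, mu=1.0, horizon=50000)
L_theory = 0.5 / (1 - 0.5)            # rho/(1-rho) with rho = 0.5
```

With a long enough horizon the simulated mean settles near the theoretical value, which is exactly the validation the abstract describes.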

19.
With the increasing acceptance of electronic health records, there is growing interest in applying data mining approaches in this field. This study introduces a novel approach for exploring and comparing temporal trends within different in-patient subgroups, based on association rule mining using the Apriori algorithm and linear model-based recursive partitioning. The Nationwide Inpatient Sample (NIS) of the Healthcare Cost and Utilization Project (HCUP), Agency for Healthcare Research and Quality, was used to evaluate the proposed approach. Visual analytics on big data is used for trend discovery in the form of a regression tree with scatter plots in the leaves of the tree; the trend lines are used to directly compare linear trends within a specified time frame. Our results demonstrate the existence of opposite trends in age- and sex-based subgroups that would be impossible to discover using traditional trend-tracking techniques. Such an approach can be employed in decision-support applications by policy makers when organizing campaigns, or by hospital management for observing trends that cannot be discovered directly with traditional analytical techniques.

20.
As the foundation of all Web text analysis, the method used to represent Web text has a profound influence on analysis results. This paper proposes a multi-dimensional Web text representation method. Traditional representation methods generally extract features from the text content alone, but a document's deeper features and external features can also be used to represent it. This work focuses on three kinds of features: surface features, latent features, and social features. Surface and latent features can be extracted and learned from the text content, while social features are obtained by analyzing the interaction behavior between documents and users. The proposed multi-dimensional representation is easy to use and can be applied in a variety of text analysis models. In the experiments, two commonly used text clustering algorithms, K-means and hierarchical agglomerative clustering, are extended under this representation and named multi-dimensional K-means (MDKM) and multi-dimensional hierarchical agglomerative clustering (MDHAC), respectively. Extensive experiments demonstrate the efficiency of the method, and the experiments combining the various features yield some further in-depth findings.


Copyright © Beijing Qinyun Technology Development Co., Ltd. (北京勤云科技发展有限公司)  京ICP备09084417号