Similar Articles
20 similar articles found.
1.
In this paper, we present a memetic algorithm (MA) for solving the uncapacitated single allocation hub location problem (USAHLP). Two efficient local search heuristics are designed and implemented within the frame of an evolutionary algorithm in order to improve both the location and allocation parts of the problem. Computational experiments, conducted on standard CAB/AP hub data sets (Beasley in J Global Optim 8:429–433, 1996) and a modified AP data set with reduced fixed costs (Silva and Cunha in Comput Oper Res 36:3152–3165, 2009), show that the MA approach is superior to existing heuristic approaches for the USAHLP. For several large-scale AP instances with up to 200 nodes, the MA improved the best-known solutions from the literature. Numerical results on instances with 300 and 400 nodes introduced in Silva and Cunha (Comput Oper Res 36:3152–3165, 2009) show significant improvements in terms of both solution quality and CPU time. The robustness of the MA was additionally tested on a challenging set of newly generated large-scale instances with 520–900 nodes. To the best of our knowledge, these are the largest USAHLP instances solved in the literature to date. In addition, we report for the first time optimal solutions for 30 AP and modified AP instances.

2.
Data envelopment analysis (DEA) is a data-driven non-parametric approach for measuring the efficiency of a set of decision making units (DMUs) using multiple inputs to generate multiple outputs. Conventionally, DEA is used in ex post evaluation of actual performance, estimating an empirical best-practice frontier using minimal assumptions about the shape of the production space. However, DEA may also be used prospectively or normatively to allocate resources, costs and revenues in a given organization. Such approaches have theoretical foundations in economic theory and provide a consistent integration of the endowment-evaluation-incentive cycle in organizational management. The normative use of DEA, e.g. for allocation of resources or target setting, can be based on different principles, ranging from maximization of the joint profit (score) to combinations of individual scores or game-theoretical settings. In this paper, we propose an allocation mechanism based on a common dual weights approach. Compared to alternative approaches, our model can be interpreted as providing equal endogenous valuations of the inputs and outputs in the reference set. Given that a normative use implicitly assumes a centralized decision-maker in the evaluated organization, we claim that this approach assures a consistent and equitable internal allocation. Two numerical examples are presented to illustrate the applicability of the proposed method and to contrast it with earlier work.

3.
Some simple yet pragmatic methods of consistency testing are developed to check whether an interval fuzzy preference relation is consistent. Based on the definition of additive consistent fuzzy preference relations proposed by Tanino (Fuzzy Sets Syst 12:117–131, 1984), a study is carried out to examine the correspondence between the elements and the weight vector of a fuzzy preference relation. Then, a revised approach is proposed to obtain priority weights from a fuzzy preference relation. A revised definition is put forward for additive consistent interval fuzzy preference relations. Subsequently, linear programming models are established to generate interval priority weights for additive interval fuzzy preference relations. A practical procedure is proposed to solve group decision problems with additive interval fuzzy preference relations. Theoretical analysis and numerical examples demonstrate that the proposed methods are more accurate than those in Xu and Chen (Eur J Oper Res 184:266–280, 2008b).
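The additive-consistency condition at the heart of the methods above can be checked mechanically. A minimal Python sketch, assuming Tanino's additive transitivity r_ij + r_jk − r_ik = 0.5 and the standard weight-based construction r_ij = 0.5(w_i − w_j) + 0.5 (the paper's revised interval definition may differ):

```python
def is_additive_consistent(R, tol=1e-9):
    """Check Tanino's additive transitivity for a fuzzy preference
    relation R (with r_ii = 0.5): r_ij + r_jk - r_ik = 0.5 for all triples."""
    n = len(R)
    return all(
        abs(R[i][j] + R[j][k] - R[i][k] - 0.5) <= tol
        for i in range(n) for j in range(n) for k in range(n)
    )

# Build a consistent relation from hypothetical priority weights w
# via r_ij = 0.5*(w_i - w_j) + 0.5.
w = [0.40, 0.35, 0.25]
R = [[0.5 * (wi - wj) + 0.5 for wj in w] for wi in w]
```

Any relation built from a weight vector this way passes the check, while an arbitrary relation generally fails it; detecting such failures is the purpose of a consistency test before priority weights are derived.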

4.
An article entitled "A Note on Modeling Multiple Choice Requirements for Simple Mixed Integer Programming Solvers" was published by Ogryczak (Comput. Oper. Res. 23 (1996) 199). In this article, Ogryczak proposed a reformulation technique called special ordered inequalities (SOI) to model non-convex programming problems with special ordered sets (SOS) of variables. The SOI technique appears to be analogous to the reformulation technique introduced by Bricker (AIIE Trans. 9 (1977) 105) and is related to the reformulation and transformation technique (RTT) developed by Lin and Bricker (Eur. J. Oper. Res. 55(2) (1991) 228; Eur. J. Oper. Res. 88 (1996) 182). Since none of this literature was cited in the references of Ogryczak (Comput. Oper. Res. 23 (1996) 199), we would like to use this note to differentiate SOI and RTT and to elaborate on their connection.

Scope and purpose

In the context of non-convex programming, two major types of special ordered sets (SOS) of variables have been identified and studied. SOS1 are sets of non-negative variables where, for each set, at most one of the variables can be non-zero in the final solution. The most common application of SOS1 is multiple choice programming (MCP), which appears in the modeling of many integer programming problems in location, distribution, scheduling, etc. SOS2 requires that, for each set, at most two of the variables can be non-zero in the final solution and, if two are, they must be adjacent. SOS2 has been widely used in separable programming to model non-linear functions using sets of piece-wise linear functions. Bricker introduced an explicit reformulation technique for SOS in 1977. Lin and Bricker developed a reformulation and transformation technique (RTT) to implicitly compose the optimal Simplex tableau for MCP in 1991, and elaborated upon it with a computational report in 1996. Without citing the work by Bricker, or that by Lin and Bricker, Ogryczak proposed an analogous reformulation technique called special ordered inequalities (SOI) for SOS in 1996. This note elaborates on the connection between SOI and RTT as supplementary information for future research on SOS.
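The SOS1/SOS2 conditions discussed above are easy to state as feasibility checks on a solution vector. A minimal Python sketch of the membership conditions themselves (not the SOI or RTT reformulations, which operate on the constraint matrix):

```python
def satisfies_sos1(x, tol=1e-9):
    """SOS1: at most one variable in the ordered set is non-zero."""
    return sum(1 for v in x if abs(v) > tol) <= 1

def satisfies_sos2(x, tol=1e-9):
    """SOS2: at most two non-zero variables, and if there are two,
    they must be adjacent in the ordering."""
    idx = [i for i, v in enumerate(x) if abs(v) > tol]
    return len(idx) <= 1 or (len(idx) == 2 and idx[1] - idx[0] == 1)
```

Reformulation techniques such as SOI and RTT replace these combinatorial conditions with linear inequalities (plus binary variables or implicit tableau manipulations) so that ordinary mixed-integer solvers can handle them.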

5.
In this paper we present a new Benders decomposition method for solving stochastic complementarity problems, based on the work of Fuller and Chung (Comput Econ 25:303–326, 2005; Eur J Oper Res 185(1):76–91, 2007). A master problem and a subproblem are proposed, both of which take the form of a complementarity problem or an equivalent variational inequality. These problems are solved iteratively until a certain convergence gap is sufficiently close to zero. The details of the method are presented, as well as an extension of the theory from Fuller and Chung (2005, 2007). In addition, extensive numerical results are provided based on an electric power market model of Hobbs (IEEE Trans Power Syst 16(2):194–202, 2001) to which stochastic elements have been added. These results validate the approach and indicate dramatic improvements in solution times as compared to solving the extensive form of the problem directly.

6.
Chen and Tsai [Eur J Oper Res 212:386–397, 2011] proposed a method to find the lower and upper bounds of the α-cut of the minimum total fuzzy crash cost and the optimal fuzzy activity times of project networks in fuzzy environments, and used the values of these bounds, for different values of α, to obtain the minimum total fuzzy crash cost and the optimal fuzzy activity times. In this paper, it is pointed out that in the α-cut of the optimal fuzzy activity times obtained by the existing method, the lower bound is not necessarily less than the upper bound. Modifications to the existing method are then suggested so that in the α-cut of the optimal fuzzy activity times, the lower bound is always less than or equal to the upper bound.
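The core requirement behind the suggested modifications — that each α-cut be a properly ordered interval — can be sketched generically in Python (an illustrative repair, not the authors' specific correction):

```python
def repair_alpha_cut(lower, upper):
    """Return a properly ordered alpha-cut: any alpha-cut [L, U] of a
    fuzzy quantity must satisfy L <= U, so swap the bounds if needed."""
    return (min(lower, upper), max(lower, upper))
```

Detecting (or repairing) an inverted pair of bounds after they are computed is precisely the validity condition the note shows the existing method can violate.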

7.
Recently, remanufacturing systems have been studied from various viewpoints. Van der Laan and Teunter (Eur J Oper Res 175(2):1084–1102, 2006), for example, proposed simple heuristics for push and pull remanufacturing strategies. Being simple heuristics, however, they are of limited use under stochastic demand. An adaptive strategy should be incorporated into the pull strategy to improve performance; we therefore propose an adaptive pull strategy for remanufacturing systems that can control the manufacturing and remanufacturing rates. The performance and effectiveness of the proposed system are analyzed by Markov analysis, and the results are reported in this paper.

8.
This paper explores the use of the optimisation procedures in SAS/OR software with application to the measurement of efficiency and productivity of decision-making units (DMUs) using data envelopment analysis (DEA) techniques. DEA, originally introduced by Charnes et al. [J. Oper. Res. 2 (1978) 429], is a linear programming method for assessing the efficiency and productivity of DMUs. Over the last two decades, DEA has gained considerable attention as a managerial tool for measuring the performance of organisations, and it has been widely used for assessing the efficiency of public and private sectors such as banks, airlines, hospitals, universities and manufacturers. As a result, new applications with more variables and more complicated models are being introduced.

Following the successive development of DEA, a non-parametric productivity measure, the Malmquist index, was introduced by Fare et al. [J. Prod. Anal. 3 (1992) 85]. Employing the Malmquist index, productivity growth can be decomposed into technical change and efficiency change.

SAS, for its part, is powerful software capable of solving various optimisation problems, such as linear programming with all types of constraints. To facilitate the use of DEA and the Malmquist index by SAS users, a SAS/MALM code was implemented in the SAS programming language. The SAS macro developed in this paper selects the chosen variables from a SAS data file and constructs sets of linear-programming models based on the selected DEA model. An example is given to illustrate how one could use the code to measure the efficiency and productivity of organisations.
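In the special case of one input and one output, the CCR efficiency score reduces to a ratio comparison, which conveys the idea without the per-DMU linear programs required in the general multi-input, multi-output case (and which the SAS macro constructs). A minimal Python sketch of this special case:

```python
def ccr_efficiency_single(inputs, outputs):
    """CCR efficiency scores for the one-input, one-output case:
    each DMU's output/input ratio divided by the best observed ratio,
    so frontier DMUs score exactly 1.0."""
    ratios = [y / x for x, y in zip(inputs, outputs)]
    best = max(ratios)
    return [r / best for r in ratios]
```

With multiple inputs and outputs, each DMU instead requires solving a linear program that chooses the most favourable weights for that DMU subject to no unit exceeding efficiency 1.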

9.
Given a graph with a cost and a delay on each edge, Restricted Shortest Path (RSP) aims to find a min-cost s-t path subject to an end-to-end delay constraint. The problem is NP-hard. In this note we present an FPTAS with an improved running time of O(mn/ε) for acyclic graphs, where m and n denote the number of edges and nodes in the graph. Our algorithm uses a scaling and rounding technique similar to that of Hassin [Math. Oper. Res. 17 (1) (1992) 36-42]. The novelty of our algorithm lies in its “adaptivity”. During each iteration of our algorithm the approximation parameters are fine-tuned according to the quality of the current solution so that the running time is kept low while progress is guaranteed at each iteration. Our result improves those of Hassin [Math. Oper. Res. 17 (1) (1992) 36-42], Phillips [Proc. 25th Annual ACM Symposium on the Theory of Computing, 1993, pp. 776-785], and Raz and Lorenz [Technical Report, 1999].
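FPTASs of this family scale and round an exact pseudo-polynomial dynamic program for RSP. A minimal Python sketch of that underlying DP (assuming positive integer edge delays and nonnegative costs; the adaptive scaling itself is omitted):

```python
def restricted_shortest_path(n, edges, s, t, budget):
    """Exact DP for RSP: dp[d][v] is the min cost of an s->v path with
    total delay <= d.  Edges are (u, v, cost, delay) tuples with
    positive integer delays and nonnegative costs; returns the min
    cost of an s->t path whose delay does not exceed `budget`."""
    INF = float("inf")
    dp = [[INF] * n for _ in range(budget + 1)]
    for d in range(budget + 1):
        dp[d][s] = 0.0
    for d in range(1, budget + 1):
        for v in range(n):
            # any path feasible within delay d-1 is feasible within d
            best = dp[d - 1][v]
            for (u, w, cost, delay) in edges:
                if w == v and delay <= d:
                    best = min(best, dp[d - delay][u] + cost)
            dp[d][v] = best
    return dp[budget][t]
```

The running time is O(D·m) for delay budget D, which is pseudo-polynomial; scaling and rounding the delays (as in Hassin's technique) turns this into a fully polynomial approximation scheme.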

10.
This paper considers co-investments in a supply chain infrastructure using an inter-temporal investment model. We assume that the supply chain firms' capital consists essentially of an investment in the supply chain's infrastructure. As a result, firms' policies consist of selecting both an optimal level of employment and the level of co-investment in the supply chain infrastructure. Recent papers by Kogan and Tapiero (Eur J Oper Res 2009; Supply chain games: Operations management and risk valuation. Springer, Boston 2007) have presented open-loop and feedback solutions for non-cooperating firms and have shown that these solutions differ from a unique system-wide optimal solution which maximizes the overall supply chain profit. To overcome this problem and thereby improve supply chain performance, this paper suggests a coordination approach. Such an approach is consistent with a recent practice consisting in the creation of supply chain shared capital (or joint funding of selected activities), with a temporal reward (or penalty) offered to non-cooperating firms for each dollar of investment they make. In addition, this paper provides a closed-form expression for the time-sensitive reward function in terms of the system parameters. We show that when these rewards are offered, the Nash co-investment equilibrium coincides with the system-wide optimal solution.

11.
Fuzzy regression using least absolute deviation estimators
In fuzzy regression, which was first proposed by Tanaka et al. (Eur J Oper Res 40:389–396, 1989; Int Cong Appl Syst Cybern 4:2933–2938, 1980; IEEE Trans Syst Man Cybern 12:903–907, 1982), there is a tendency for the width of the estimated dependent variable to grow with the values of the independent variables. This reduces the accuracy of fuzzy regression models constructed by the least squares method. This paper suggests least absolute deviation estimators for constructing the fuzzy regression model and investigates the performance of the resulting models with respect to a certain error measure. Simulation studies and examples show that, when the data contain fuzzy outliers, the proposed model produces less error than the widely studied fuzzy regression models based on the least squares method.
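The robustness argument for least absolute deviation (LAD) estimators is visible already in the location-estimate special case: the median (the LAD estimate) resists an outlier that drags the mean (the least squares estimate). A minimal Python illustration with hypothetical data:

```python
from statistics import mean, median

def sum_abs_dev(data, c):
    """Sum of absolute deviations of the data from a location c."""
    return sum(abs(x - c) for x in data)

# Hypothetical sample with one outlier (100.0).
data = [1.0, 2.0, 3.0, 4.0, 100.0]
m_lad = median(data)  # the LAD location estimate
m_lsq = mean(data)    # the least squares location estimate
```

The median stays near the bulk of the data while the mean is pulled toward the outlier; the same mechanism makes LAD-based fuzzy regression less sensitive to fuzzy outliers than its least-squares counterpart.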

12.
International Journal of Computer Mathematics, 2012, 89(16):3380–3393
This paper is concerned with a variant of the multiple knapsack problem (MKP), where knapsacks are available by paying certain ‘costs’, and we have a fixed budget to buy these knapsacks. The problem is then to determine the set of knapsacks to be purchased, as well as to allocate items to the accepted knapsacks, in such a way as to maximize the net total profit. We call this the budget-constrained MKP and present a branch-and-bound algorithm to solve it to optimality. We employ the Lagrangian relaxation approach to obtain an upper bound. Together with the lower bound obtained by a greedy heuristic, we apply the pegging test to reduce the problem size. Next, in the branch-and-bound framework, we make use of the Lagrangian multipliers obtained above for pruning subproblems, and at each terminal subproblem we solve the MKP exactly by calling the MULKNAP code [D. Pisinger, An exact algorithm for large multiple knapsack problem, European J. Oper. Res. 114 (1999), pp. 528–541]. We were thus able to solve test problems with up to 160,000 items and 150 knapsacks within a few minutes in our computing environment. However, solving instances with a relatively large number of knapsacks compared with the number of items remains hard. This is due to a weakness of MULKNAP on this type of problem, which our algorithm inherits as well.
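A greedy lower bound of the kind mentioned above can be sketched as follows. This is an illustrative heuristic, not the authors' exact one: it buys knapsacks by capacity per unit cost within the budget, then fills them first-fit by profit density, and (as an assumption for the sketch) counts net profit as item profit minus purchase cost:

```python
def greedy_budget_mkp(items, knapsacks, budget):
    """items: list of (profit, weight); knapsacks: list of (cost, capacity).
    Buy knapsacks greedily by capacity per unit cost within the budget,
    then place items first-fit in order of profit density.
    Returns (item profit minus purchase cost, sorted bought indices)."""
    order = sorted(range(len(knapsacks)),
                   key=lambda j: knapsacks[j][1] / knapsacks[j][0],
                   reverse=True)
    bought, spent = [], 0
    for j in order:
        cost, _ = knapsacks[j]
        if spent + cost <= budget:
            bought.append(j)
            spent += cost
    remaining = {j: knapsacks[j][1] for j in bought}
    profit = 0
    for p, wgt in sorted(items, key=lambda it: it[0] / it[1], reverse=True):
        for j in bought:
            if remaining[j] >= wgt:
                remaining[j] -= wgt
                profit += p
                break
    return profit - spent, sorted(bought)
```

In a branch-and-bound scheme such a lower bound is paired with a relaxation-based upper bound (here, Lagrangian) so that subproblems whose bound falls below the incumbent can be pruned.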

13.
This paper focuses on the problem of how to divide a fixed cost, as a complement to an original input, among decision-making units (DMUs) equitably. Using the data envelopment analysis (DEA) technique, this paper addresses the problem from the perspective of efficiency analysis. It is found that not all DMUs can become efficient under common weights if a sufficiently low fixed cost is assigned. Therefore, the global modified additive DEA (MAD) model is introduced. By optimizing the global MAD-efficiency, a new allocation method and a corresponding algorithm that ensures the uniqueness of the allocation result are designed. The proposed method can be used under both constant and variable returns to scale for nonnegative data; it is suitable for situations where costs play a large role in the production of DMUs. Numerical results show the validity and advantages of our method.

14.
Many types of facility location/allocation models have been developed to find optimal spatial patterns with respect to various location criteria that include cost, time, coverage, and access among others. In this paper we develop and test location modeling formulations that utilize data envelopment analysis (DEA) efficiency measures to find optimal and efficient facility location/allocation patterns. We believe that solving for the DEA efficiency measure, simultaneously with other location modeling objectives, provides a promising rich approach to multiobjective location problems.

15.
In this note we observe that the problem of mixed graph coloring can be solved in linear time for trees, which improves the quadratic algorithm of Hansen et al. [P. Hansen, J. Kuplinsky, D. de Werra, Mixed graph colorings, Math. Methods Oper. Res. 45 (1997) 145-160].

16.

Purpose

The objective of this study is to optimize task scheduling and resource allocation in a cloud computing environment using an improved differential evolution algorithm (IDEA) based on the proposed cost and time models.

Methods

The proposed IDEA combines the Taguchi method with a differential evolution algorithm (DEA). The DEA has a powerful global exploration capability over the macro-space and uses few control parameters. The systematic reasoning ability of the Taguchi method is used to exploit better individuals in the micro-space as potential offspring. The proposed IDEA is therefore well balanced between exploration and exploitation. The proposed cost model includes processing and receiving costs, and the time model incorporates receiving, processing, and waiting times. A multi-objective optimization approach based on the non-dominated sorting technique, rather than a normalized single-objective method, is applied to find the Pareto front of total cost and makespan.

Results

In the five-task five-resource problem, the mean coverage ratios C(IDEA, DEA) of 0.368 and C(IDEA, NSGA-II) of 0.3 are superior to the ratios C(DEA, IDEA) of 0.249 and C(NSGA-II, IDEA) of 0.288, respectively. In the ten-task ten-resource problem, the mean coverage ratios C(IDEA, DEA) of 0.506 and C(IDEA, NSGA-II) of 0.701 are superior to the ratios C(DEA, IDEA) of 0.286 and C(NSGA-II, IDEA) of 0.052, respectively. Wilcoxon matched-pairs signed-rank test confirms there is a significant difference between IDEA and the other methods. In summary, the above experimental results confirm that the IDEA outperforms both the DEA and NSGA-II in finding the better Pareto-optimal solutions.
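The coverage ratio C(A, B) used in these comparisons is presumably the standard Zitzler–Thiele set-coverage metric: the fraction of points in B that are weakly dominated by (or equal to) some point in A, so that C(IDEA, DEA) > C(DEA, IDEA) indicates IDEA's front covers more of DEA's than vice versa. A minimal Python sketch for minimization objectives:

```python
def dominates(a, b):
    """a weakly dominates b (minimization): a is no worse in every
    objective and strictly better in at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def coverage(A, B):
    """C(A, B): fraction of points in B that are dominated by, or equal
    to, some point in A.  C(A, B) = 1 means A completely covers B."""
    return sum(1 for b in B if any(dominates(a, b) or a == b for a in A)) / len(B)
```

Note that C is not symmetric, which is why both C(A, B) and C(B, A) are reported for each pair of algorithms.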

Conclusions

In this study, the IDEA shows its effectiveness in optimizing task scheduling and resource allocation compared with both the DEA and the NSGA-II. Moreover, when conflicting objectives are present, decision makers can select among the Gantt charts of task schedules offering smaller makespan, smaller cost, or both.

17.
Inaccuracies in calculated product costs have existed since the development of costing systems. A key contributor to the issue is the use of inappropriate bases for the application of overhead costs. This research proposes, and provides preliminary evaluation in a virtual environment of, a new allocation base that is believed to be better matched to the consumption rate of the indirect costs being allocated. Using a generalized manufacturing operational framework incorporating multi-period simulation, this research investigates the relationship between allocated cost categories and production or sales order activity. The existing cost allocation methods of full absorption and activity based costing (ABC) are used for comparison with the proposed method. Results show that at the aggregate reporting level (that is, the income statement), the use of sales order or production order activity as an allocation base tracks closely with performance levels experienced using more traditional allocation bases. However, the results indicate that the impact on calculated product costs would influence decision making within a firm in terms of sales emphasis, mix, and the markets in which to expand and from which to exit. This approach to cost allocation would equal other Enterprise Resource Planning system-based solutions in simplicity of maintenance while offering product cost accuracy roughly equal to that of unit-level-focused ABC systems, without requiring their substantial maintenance costs.

18.
The main focus of this paper is a Shapley value for multichoice games introduced by van den Nouweland et al. (ZOR–Math. Meth. Oper. Res. 41:289–311, 1995). We provide several characterizations from traditional game theory and redefine them in the framework of multichoice games. The relationship between the core and this Shapley value for multichoice games is also discussed. When multichoice games are convex, this Shapley value is a multichoice population monotonic allocation scheme (MPMAS).

19.
In this paper, the optimal (N,T)-policy for an M/G/1 system with cost structure is studied. The system operates only intermittently and is shut down when no customers are present. A fixed set-up cost of K>0 is incurred each time the system is reopened, and a holding cost of h>0 per unit time is incurred for each customer present. The (N,T)-policy studied for this system is as follows: the system reactivates as soon as N customers are present or the waiting time of the leading customer reaches a predefined time T (see A.S. Alfa, I. Frigui, Eur. J. Oper. Res. 88 (1996) 599-613; Y.N. Doganata, in: E. Arikan (Ed.), Communication, Control, and Signal Processing, 1990, pp. 1663–1669). As a comparison, the start of the timer count is then relaxed as follows: the system reactivates as soon as N customers are present or the time elapsed since the end of the last busy period reaches a predefined time T. For both cases, the explicit optimal policy (N*,T*) minimizing the long-run average cost per unit time is obtained. As extreme cases, we include the simple optimal policies for the N- and T-policies. Several counter-intuitive results are obtained about the optimal T-policies for both types of models.

20.
It has long been assumed that shortages in inventory systems are either completely backlogged or totally lost. However, it is more reasonable to assume that the longer the waiting time for the next replenishment, the smaller the backlogging rate. Moreover, the opportunity cost due to lost sales should be considered, since some customers will not wait for backlogging during shortage periods. Without these two realistic conditions, the study of inventory modeling for deteriorating items with shortages and partial backlogging cannot be complete and general. In the present article we define an appropriate time-dependent partial backlogging rate and introduce the opportunity cost due to lost sales. Numerical examples are presented to illustrate the effects of changes in the backlogging parameter and the unit opportunity cost on the total cost and the optimal number of replenishments.

Scope and purpose

In a recent article published in this Journal, Giri et al. (Comput. Oper. Res. 27 (2000) 495–505) applied an existing procedure to the inventory problem of Hariga and Al-Alyan (Comput. Oper. Res. 24 (1997) 1075–1083), which concerns a lot-sizing heuristic for deteriorating items with shortages allowed in all cycles except the last. Giri et al. deviated from the traditional practice and suggested a new policy allowing shortages in all cycles over a finite planning horizon. Their numerical results indicated that the proposed policy is cheaper to operate, with cost reductions of up to 15%. However, they did not consider the opportunity cost due to lost sales incurred because customers will not wait for backlogging. Moreover, for many products with growing sales, the length of the waiting time until the next replenishment is the main factor determining whether backlogging will be accepted, so the backlogging rate is expected to be time-dependent. Thus the assumption made in Giri et al. that the backlogging rate is a fixed fraction of the total amount of shortages is not reasonable.

The purpose of this paper is to present a more realistic treatment of the inventory problem for deteriorating items with time-varying demand and shortages over a finite planning horizon. In contrast to the model of Giri et al., we define an appropriate partial backlogging rate and introduce the opportunity cost due to lost sales. We attempt to complement their model as a practical and general solution for inventory replenishment problems. With these extensions, the scope of applications of the present results is expanded.
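A time-dependent partial backlogging rate of the kind described is commonly modeled in the wider literature as B(t) = 1/(1 + δt), where t is the customer's waiting time until the next replenishment and δ > 0 reflects impatience; the exact functional form used in this paper may differ. A minimal Python sketch:

```python
def backlogging_rate(wait, delta=0.5):
    """Fraction of shortage demand that is backlogged after a customer
    has waited `wait` time units; delta > 0 models impatience.  The
    form 1/(1 + delta*t) is an assumption from the wider literature,
    not necessarily the one adopted in this paper."""
    return 1.0 / (1.0 + delta * wait)

def shortage_split(demand, wait, delta=0.5):
    """Split shortage demand into (backlogged, lost-sales) parts; the
    lost part incurs the unit opportunity cost."""
    b = backlogging_rate(wait, delta)
    return demand * b, demand * (1.0 - b)
```

The rate equals 1 at zero waiting time (full backlogging) and decreases monotonically, so longer replenishment gaps convert more shortage demand into lost sales, which is exactly what couples the backlogging parameter to the opportunity cost in the total-cost computation.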

