Similar Documents
7 similar documents found (search time: 0 ms)
1.
The aggregation of objectives is one of the simplest and most widely used approaches in multiple criteria programming, but it is well known that this technique can fail in several respects when determining the Pareto frontier. This paper proposes a new approach to multicriteria optimization that aggregates the objective functions and uses a line search method to locate an approximate efficient point. Once the first Pareto solution is obtained, a simplified version of the method is applied, in the context of Pareto dominance, to obtain a set of efficient points that ensures a thorough distribution of solutions along the Pareto frontier. In its current form, the proposed technique is well suited to problems with many objectives (it is not limited to bi-objective problems) but requires the objective functions to be twice continuously differentiable. To assess its effectiveness, experiments were performed and compared with two recent, well-known population-based metaheuristics, ParEGO and NSGA-II. Compared with these, the proposed approach not only ensures better convergence to the Pareto frontier but also produces a good distribution of solutions. From a computational point of view, both stages of the line search converge within a short time (on average about 150 ms for the first stage and about 20 ms for the second). Moreover, the proposed technique is very simple and easy to implement and use for solving multiobjective problems.
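A minimal sketch of the aggregation-plus-line-search idea follows. It is not the authors' exact two-stage method: scipy's BFGS (which performs a Wolfe line search internally) stands in for their custom line search, and the toy objectives, weight schedule, and dominance filter are all illustrative assumptions.

```python
# Sketch: weighted-sum aggregation solved by a line-search-based optimizer,
# then a Pareto-dominance filter over the collected solutions.
import numpy as np
from scipy.optimize import minimize

# Two smooth (twice continuously differentiable) toy objectives on R^2.
f1 = lambda x: (x[0] - 1.0) ** 2 + x[1] ** 2
f2 = lambda x: x[0] ** 2 + (x[1] - 1.0) ** 2

def aggregate(w):
    """Weighted-sum scalarization of the objective vector."""
    return lambda x: w[0] * f1(x) + w[1] * f2(x)

def dominates(a, b):
    """True if objective vector a Pareto-dominates b (minimization)."""
    return np.all(a <= b) and np.any(a < b)

# Stage 1: locate one approximate efficient point per weight vector.
# Stage 2 (greatly simplified here): keep only non-dominated outcomes.
candidates = []
for w in np.linspace(0.05, 0.95, 19):
    res = minimize(aggregate((w, 1.0 - w)), x0=np.zeros(2), method="BFGS")
    candidates.append(np.array([f1(res.x), f2(res.x)]))

front = [c for c in candidates
         if not any(dominates(o, c) for o in candidates if o is not c)]
print(f"{len(front)} non-dominated points approximate the Pareto frontier")
```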

2.
Important efforts have been made in recent years to develop methods for constructing Pareto frontiers that guarantee a uniform distribution of points and exclude non-Pareto and local-Pareto points. Nevertheless, these methods can be improved or modified to reach the same quality of results more efficiently. This paper presents some of these possibilities, based on two types of techniques: those based on nonlinear optimization and those based on genetic algorithms. The first provides appropriate solutions at reasonable computational cost, though it is highly dependent on the initial points and on the presence or absence of local minima. The second does not exhibit such dependence, although its computational cost is higher. Since the construction of the Pareto frontier is usually performed off-line, that computational cost is not a restrictive factor. The merit of the improvements proposed in the paper is demonstrated with two bicriterion examples.
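The initial-point dependence noted above is easy to demonstrate. In this small sketch (the multimodal test function is an assumption, not from the paper), the same scalarized problem is solved from several starting points, and a local optimizer may return different, possibly only locally optimal, solutions.

```python
# Sketch: a local optimizer started from different points can land in
# different local minima of a scalarized multiobjective problem.
import numpy as np
from scipy.optimize import minimize

# A scalarized objective with several local minima along x[0].
g = lambda x: (x[0] ** 2 + x[1] ** 2) + 2.0 * np.sin(3.0 * x[0]) ** 2

for x0 in ([-2.0, 0.0], [0.0, 0.0], [2.0, 0.0]):
    res = minimize(g, x0=np.array(x0), method="Nelder-Mead")
    print(f"start {x0} -> minimum at {np.round(res.x, 3)}, g = {res.fun:.3f}")
```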

3.
This paper considers a bi-objective, multi-product model for the design of a four-echelon production/distribution supply chain logistics network. The proposed optimization model minimizes both the total cost of the network (including the fixed cost of opening facilities and the transportation costs between them) and the total CO2 emissions. Five factors (network size, product complexity, cost variability, CO2 emissions generation, and over-capacity) define the experimental framework. The problem is solved using the ε-constraint method, and the resulting Pareto frontiers (PF) are characterized using five new metrics developed specifically for analysing how those factors affect the resulting optimal configurations. The results show that over-capacity and product complexity are the two most influential factors with respect to the characteristics of the PF, and that their effects act in the same direction: more complexity and more capacity yield a wider set of optimal alternatives, some close to the ideal point, and in general with a smaller number of links used.
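A minimal sketch of the ε-constraint method named above: one objective is minimized while the other is capped at a sweep of ε values, tracing the Pareto frontier. The toy quadratics standing in for cost and CO2 emissions are assumptions; the paper's real model is a facility-location network design problem that would normally go to a MILP solver, not scipy.

```python
# Sketch: epsilon-constraint method on a toy bi-objective problem.
import numpy as np
from scipy.optimize import minimize

cost = lambda x: (x[0] - 2.0) ** 2 + x[1] ** 2        # stand-in: network cost
emissions = lambda x: x[0] ** 2 + (x[1] - 2.0) ** 2   # stand-in: CO2 emissions

pareto = []
for eps in np.linspace(0.5, 7.5, 8):
    # Minimize cost subject to emissions(x) <= eps.
    con = {"type": "ineq", "fun": lambda x, e=eps: e - emissions(x)}
    res = minimize(cost, x0=np.ones(2), constraints=[con], method="SLSQP")
    if res.success:
        pareto.append((cost(res.x), emissions(res.x)))

for c, e in pareto:
    print(f"cost = {c:6.3f}  emissions = {e:6.3f}")
```

Sweeping ε over the attainable range of the constrained objective is what produces the full frontier; each ε value yields at most one efficient point.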

4.
This paper proposes a method for calculating the Pareto frontier of batch job execution time versus data transfer time in hybrid clouds. Depending on the nature of the cloud application, jobs are assumed to require a number of data files from either public or private clouds. For example, gene probes can be used to identify various infectious agents such as bacteria and viruses; the computationally heavy task of aligning probes of a patient's DNA (private data) with normal sequences (public data) of various sizes is central to this process. Such files have different characteristics depending on their nature, and may or may not be eligible for replication in the cloud: some files are too big to replicate (big data), while others are small enough to be replicated but cannot be because they contain sensitive information (private data). To capture the relationship between the execution time of a batch of jobs and the transfer time of their required data in a hybrid cloud, we first model the problem as a bi-objective optimization problem and then propose a Particle Swarm Optimization (PSO)-based approach, called here PSO-ParFnt, to find the corresponding Pareto frontier. The results are promising and provide new insights into this complex problem.
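Below is a minimal sketch of a Pareto-archiving PSO in the spirit of the approach described above. It is not PSO-ParFnt itself: the two toy objectives stand in for batch execution time and data-transfer time, and the velocity rule, coefficients, and archive handling are generic multiobjective-PSO assumptions.

```python
# Sketch: PSO with an external archive of non-dominated solutions.
import numpy as np

rng = np.random.default_rng(0)
f = lambda x: np.array([x[0] ** 2 + x[1] ** 2,                  # "execution time"
                        (x[0] - 1.0) ** 2 + (x[1] - 1.0) ** 2])  # "transfer time"

def dominates(a, b):
    return np.all(a <= b) and np.any(a < b)

n, dim, iters = 20, 2, 100
pos = rng.uniform(-1.0, 2.0, (n, dim))
vel = np.zeros((n, dim))
pbest = pos.copy()
archive = []                      # external archive of non-dominated solutions

for _ in range(iters):
    for i in range(n):
        fx = f(pos[i])
        if dominates(fx, f(pbest[i])):
            pbest[i] = pos[i].copy()
        # Insert into archive, dropping anything the new point dominates.
        if not any(dominates(f(a), fx) for a in archive):
            archive = [a for a in archive if not dominates(fx, f(a))]
            archive.append(pos[i].copy())
    for i in range(n):
        leader = archive[rng.integers(len(archive))]   # random global guide
        r1, r2 = rng.random(dim), rng.random(dim)
        vel[i] = (0.5 * vel[i] + 1.5 * r1 * (pbest[i] - pos[i])
                               + 1.5 * r2 * (leader - pos[i]))
        pos[i] += vel[i]

print(f"archive holds {len(archive)} non-dominated trade-off points")
```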

5.
A spatially explicit land use change model is typically based on the assumption that the relationship between land use change and its explanatory processes is stationary. This means that the model structure and parameterization are usually kept constant over the model runtime, ignoring potential systemic changes in this relationship resulting from societal changes. We have developed a methodology to test for systemic changes and demonstrate it by assessing whether a land use change model with a constant structure adequately represents the land use system, given a time series of observations of past land use. This was done by assimilating observations of real land use into a land use change model using a Bayesian data assimilation technique, the particle filter. The particle filter was used to update the prior knowledge about the model structure, i.e. the selection and relative importance of the explanatory processes for land use change allocation, and about the parameters. For each point in time at which observations were available, the optimal model structure and parameterization were determined. In a case study of sugar cane expansion in Brazil, the assumption of a constant model structure was found not to be fully adequate, indicating systemic change during the modelling period (2003–2012). The systemic change appeared to be indirect: a factor affects the demand for sugar cane, an input variable, in such a way that the transition rules and parameters have to change as well. Although an inventory was made of societal changes in the study area during the studied period, none of them could be directly related to the onset of the observed systemic change in the land use system. Allowing for systemic changes in the model structure widened the 95% confidence interval of the projected sugar cane fractions by, on average, a factor of two compared with the assumption of a stationary system. This shows the importance of accounting for systemic changes in projections of land use change, so as not to underestimate the uncertainty of future projections.
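A minimal bootstrap particle filter sketch illustrates the assimilation step described above: particles carry candidate model parameters, are weighted by their fit to each new observation, and are resampled. The scalar "sugar cane fraction" model, the observation series, and the noise levels are all illustrative assumptions, not the paper's land use model.

```python
# Sketch: bootstrap particle filter assimilating yearly observations.
import numpy as np

rng = np.random.default_rng(1)

def model_step(frac, growth):
    """Toy allocation model: sugar cane fraction grows by 'growth' per year."""
    return np.clip(frac + growth, 0.0, 1.0)

n = 500
growth = rng.normal(0.02, 0.02, n)      # prior over the model parameter
frac = np.full(n, 0.10)                 # initial sugar cane fraction
obs = [0.13, 0.17, 0.22, 0.30]          # yearly observations (made up)
obs_sd = 0.02

for y, z in enumerate(obs):
    frac = model_step(frac, growth)
    # Weight each particle by the likelihood of the observation.
    w = np.exp(-0.5 * ((z - frac) / obs_sd) ** 2)
    w /= w.sum()
    # Resample particles (and their parameters) in proportion to weight.
    idx = rng.choice(n, size=n, p=w)
    frac, growth = frac[idx], growth[idx]
    lo, hi = np.percentile(frac, [2.5, 97.5])
    print(f"year {y}: growth ~ {growth.mean():.3f}, "
          f"95% CI of fraction = [{lo:.3f}, {hi:.3f}]")
```

If the posterior over the parameter has to keep drifting to track the observations, as the growth estimate does here, that is the kind of signal the paper reads as systemic change.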

6.
Effective job shop scheduling (JSS) in the manufacturing industry helps meet production demand, reduce production costs, and improve competitiveness in an increasingly volatile market demanding multiple products. In this paper, a universal mathematical model of the JSS problem for the apparel assembly process is constructed. The objective of this model is to minimize the total penalties of earliness and tardiness by deciding when to start each order's production and how to assign operations to machines (operators). A genetic optimization process is then presented to solve the model, featuring a new chromosome representation, a heuristic initialization process, and modified crossover and mutation operators. Three experiments using industrial data are presented to evaluate the performance of the proposed method. The experimental results demonstrate the effectiveness of the proposed algorithm for solving the JSS problem in a mixed- and multi-product assembly environment.
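A minimal GA sketch for the earliness/tardiness objective follows, reduced to sequencing orders on a single machine. The paper's actual chromosome additionally encodes start times and operation-to-operator assignment; the processing times, due dates, penalty weights, and GA settings here are made-up assumptions.

```python
# Sketch: permutation GA minimizing weighted earliness + tardiness.
import random

random.seed(0)
proc = [4, 3, 6, 2, 5]          # processing time of each order
due = [6, 5, 14, 9, 12]         # due date of each order
pen_e, pen_t = 1.0, 2.0         # earliness / tardiness penalty weights

def cost(seq):
    """Total weighted earliness + tardiness of a processing sequence."""
    t, total = 0, 0.0
    for j in seq:
        t += proc[j]
        total += pen_e * max(0, due[j] - t) + pen_t * max(0, t - due[j])
    return total

def crossover(a, b):
    """Order crossover (OX): keep a slice of 'a', fill the rest from 'b'."""
    i, j = sorted(random.sample(range(len(a)), 2))
    child = a[i:j]
    rest = [g for g in b if g not in child]
    return rest[:i] + child + rest[i:]

pop = [random.sample(range(5), 5) for _ in range(30)]
for _ in range(100):
    pop.sort(key=cost)
    elite = pop[:10]
    children = [crossover(random.choice(elite), random.choice(elite))
                for _ in range(20)]
    for c in children:            # swap mutation
        if random.random() < 0.2:
            i, j = random.sample(range(5), 2)
            c[i], c[j] = c[j], c[i]
    pop = elite + children

best = min(pop, key=cost)
print(f"best sequence {best} with penalty {cost(best)}")
```

The order crossover keeps every child a valid permutation, which is why permutation-encoded scheduling GAs use it instead of plain one-point crossover.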

7.
This paper presents a flexible algorithm based on artificial neural networks (ANNs), genetic algorithms (GAs), and multivariate analysis for the performance assessment and optimization of complex production units (CPUs) with respect to machinery productivity indicators (MPIs). The multivariate techniques include data envelopment analysis (DEA), principal component analysis (PCA), and numerical taxonomy (NT). Two case studies demonstrate the applicability of the proposed approach. In the first, the machinery productivity indicators are categorized into four standard classes: availability, machinery stoppage, random failure, and value added and production value. In the second, the productivity of production units is evaluated in terms of health, safety, environment, and ergonomics indicators. The flexible algorithm can handle both linearity and complexity in the data sets; the ANN and GA are applied to capture the nonlinearity and complexity of CPUs. The results are validated and verified by the internal mechanism of the algorithm. The algorithm is applied to a large set of production units to show its superiority and applicability over conventional approaches. Results show that, for nonlinear data sets, the ANN outperforms the GA and conventional approaches. The flexible algorithm of this study can easily be extended to other units for the assessment and optimization of CPUs with respect to machinery indicators.
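The ANN-versus-conventional comparison reported above can be sketched on synthetic data: a small MLP and a linear model are fitted to a nonlinear productivity indicator, and the MLP's held-out fit is better. The data, feature names, and network size are assumptions; the paper's full algorithm additionally involves GA, DEA, PCA, and numerical taxonomy.

```python
# Sketch: ANN vs. linear fit on a nonlinear productivity indicator.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(2)
X = rng.uniform(0.0, 1.0, (300, 3))     # availability, stoppage, failure rate
# Nonlinear "production value" indicator with an interaction term.
y = np.sin(3.0 * X[:, 0]) + X[:, 1] * X[:, 2] + rng.normal(0.0, 0.05, 300)

lin = LinearRegression().fit(X[:200], y[:200])
ann = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=3000,
                   random_state=0).fit(X[:200], y[:200])

print(f"linear R^2 on held-out units: {lin.score(X[200:], y[200:]):.3f}")
print(f"ANN    R^2 on held-out units: {ann.score(X[200:], y[200:]):.3f}")
```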
