Similar Documents
20 similar documents found (search time: 31 ms).
1.
Based on the constraints and frame conditions given by the real processes, production in bakeries can be modelled as a no-wait permutation flow-shop, following the definitions of scheduling theory. A modified genetic algorithm, ant colony optimization and a random search procedure were used to analyse and optimize the production planning of a bakery production line that processes 40 products on 26 production stages. This setup leads to 8.2 × 10⁴⁷ possible schedules in a permutation flow-shop model and is thus not solvable in reasonable time with exact methods. Two objective functions of economic interest were analysed: the makespan and the total idle time of machines. In combination with the created model, the applied algorithms proved capable of providing optimized schedules within a predefined runtime of 15 min, reducing the makespan by up to 8.6% and the total idle time of machines by up to 23%.
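To make the model concrete, the following is a minimal Python sketch of the makespan evaluation that such a search loop would call for one candidate permutation. It implements the standard no-wait flow-shop recurrence, not the authors' code, and the processing-time data are illustrative.

    import random

    def no_wait_makespan(perm, p):
        # p[j][k] = processing time of job j on stage k. Under the no-wait
        # constraint a job flows through all stages without waiting, so the
        # schedule is fixed by the minimal start-time offset between
        # consecutive jobs in the permutation.
        m = len(p[0])

        def offset(i, j):
            # Smallest start-time gap between job i and successor j that
            # keeps every stage conflict-free.
            best, c_i, s_j = 0, 0, 0
            for k in range(m):
                c_i += p[i][k]               # completion of job i on stage k
                best = max(best, c_i - s_j)  # j may not start stage k earlier
                s_j += p[j][k]               # start of j on the next stage
            return best

        span = sum(offset(a, b) for a, b in zip(perm, perm[1:]))
        return span + sum(p[perm[-1]])       # offsets + full run of last job

    # Random search over permutations, the simplest of the three methods:
    jobs = list(range(6))
    p = [[random.randint(1, 9) for _ in range(4)] for _ in jobs]
    best = min((random.sample(jobs, len(jobs)) for _ in range(10000)),
               key=lambda s: no_wait_makespan(s, p))
    print(best, no_wait_makespan(best, p))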

2.
The cost of testing activities is a major portion of the total cost of a software project. In testing, test data generation is very important because the efficiency of testing is highly dependent on the data used in this phase. In search-based software testing, soft computing algorithms explore test data in order to maximize a coverage metric, which can be cast as an optimization problem. In this paper, we employed several meta-heuristics (Artificial Bee Colony, Particle Swarm Optimization, Differential Evolution and Firefly algorithms) and a Random Search algorithm to solve this optimization problem. First, the dependency of the algorithms on the values of the control parameters was analyzed and suitable values for the control parameters were recommended. The algorithms were then compared based on various fitness functions (path-based, dissimilarity-based and approximation level + branch distance), because the fitness function affects the behaviour of the algorithms in the search space. Results showed that meta-heuristics can be used effectively for hard problems and large search spaces, and that the approximation level + branch distance fitness function generally guides the algorithms accurately.
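As an illustration of the branch-distance idea, here is a minimal sketch: a hand-coded branch-distance fitness for the nested condition of a toy program under test, with Random Search as the baseline algorithm. The function and constants are invented for the example, not taken from the paper.

    import random

    def branch_distance(x, y):
        # Fitness for covering the nested branch `if x > 10: if y == 2*x:`
        # of a toy program under test (lower is better, 0 = covered).
        d = 0.0
        if not (x > 10):
            d += (10 - x) + 1        # distance to satisfy x > 10
        if not (y == 2 * x):
            d += abs(y - 2 * x)      # distance to satisfy y == 2x
        return d

    def random_search(budget=100000, lo=-1000, hi=1000):
        best, best_d = None, float("inf")
        for _ in range(budget):
            cand = (random.randint(lo, hi), random.randint(lo, hi))
            d = branch_distance(*cand)
            if d < best_d:
                best, best_d = cand, d
            if best_d == 0:          # target branch covered
                break
        return best, best_d

    print(random_search())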

3.
The Particle Swarm Optimization (PSO) is a simple, yet very effective, population-based search algorithm. However, degradation of population diversity in the late stages of the search, or stagnation, is PSO's major drawback, and most recent related research efforts concentrate on alleviating it. The direct solution is to introduce modifications that increase exploration; however, it is then difficult to maintain the balance between exploration and exploitation. In this paper we propose decoupling exploration and exploitation using a team-oriented search (TOSO). In the proposed algorithm, the swarm is divided into two independent teams or sub-swarms, each dedicated to a particular aspect of the search: a simple but effective local search method is proposed for exploitation and an improvised PSO structure is used for exploration. The validation is conducted using a wide variety of benchmark functions, including shifted and rotated versions of popular test functions along with recently proposed composite functions, with up to 1000 dimensions. The results show that the proposed algorithm provides higher-quality solutions with faster convergence and increased robustness compared to most recently modified or hybrid PSO-based algorithms. In terms of algorithm complexity, TOSO is slightly more complex than PSO but much less complex than CLPSO; for very high dimensions (D > 400), however, TOSO is the least complex of the three.
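For reference, the sketch below is the canonical global-best PSO that such variants start from (not TOSO itself, whose team split and local search the abstract only outlines); the parameter values are common defaults, used here as assumptions.

    import random

    def pso(f, dim, n=30, iters=200, w=0.7, c1=1.5, c2=1.5, lo=-5.0, hi=5.0):
        pos = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(n)]
        vel = [[0.0] * dim for _ in range(n)]
        pbest = [x[:] for x in pos]               # personal bests
        pbest_f = [f(x) for x in pos]
        g = min(range(n), key=lambda i: pbest_f[i])
        gbest, gbest_f = pbest[g][:], pbest_f[g]  # swarm best
        for _ in range(iters):
            for i in range(n):
                for d in range(dim):
                    r1, r2 = random.random(), random.random()
                    # inertia + cognitive pull + social pull
                    vel[i][d] = (w * vel[i][d]
                                 + c1 * r1 * (pbest[i][d] - pos[i][d])
                                 + c2 * r2 * (gbest[d] - pos[i][d]))
                    pos[i][d] = min(hi, max(lo, pos[i][d] + vel[i][d]))
                fi = f(pos[i])
                if fi < pbest_f[i]:
                    pbest[i], pbest_f[i] = pos[i][:], fi
                    if fi < gbest_f:
                        gbest, gbest_f = pos[i][:], fi
        return gbest, gbest_f

    # e.g. minimise the sphere function in 10 dimensions
    best, val = pso(lambda x: sum(v * v for v in x), dim=10)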

4.
This paper introduces a Monte Carlo-based heuristic with seven local searches (LSs) carefully designed for distributed production network scheduling. A distributed production network consists of a number of individual factories that join together to form a network in which they can operate more economically than they would individually, while each factory usually focuses on its own benefits and plans to optimize its own profit. Some realistic features, such as the heterogeneity of factories and the transportation among factories, are incorporated in our problem definition. In this problem, among the F factories in the network, F′ factories are interested in the total earliness cost and the remaining F″ = F − F′ factories are interested in the total tardiness cost. Cost minimization is achieved through the minimization of earliness in the F′ factories, tardiness in the F″ factories and the total operation-time costs of all jobs. The algorithm is initialized with the best known non-cooperative local schedule, and the LSs then search simultaneously through the same solution space, starting from the same current solution. Upon receiving the solutions from the LSs, a new solution is accepted based on the Monte Carlo acceptance criterion: an improved solution is always accepted and, in order to escape local minima, a worse solution is accepted with a probability that decreases with the extent of the deterioration. After solving the mixed integer linear programming model with the CPLEX solver on the small-size instances, the results obtained by the heuristic are compared with two genetic algorithms on the large-size instances. The results of the scheduling before cooperation in the production network were also considered in the experiments.
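The acceptance step can be sketched as follows; the abstract does not give the exact probability function, so the standard Metropolis-style exponential rule is assumed here for illustration.

    import math, random

    def monte_carlo_accept(delta, temperature):
        # Always accept an improvement (delta <= 0); accept a worse
        # solution with a probability that shrinks as the deterioration
        # `delta` grows (exponential form assumed, not from the paper).
        if delta <= 0:
            return True
        return random.random() < math.exp(-delta / temperature)

    # Usage inside a search loop:
    # if monte_carlo_accept(cost(new) - cost(cur), t): cur = new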

5.
Over recent years, an increasing number of distributed resources have been connected to the power system due to ambitious environmental targets, which has resulted in more complex operation of the power system. In the future, an even larger number of resources is expected to be coupled, turning the day-ahead optimal resource scheduling problem into an even more difficult optimization problem. Under these circumstances, metaheuristics can be used to address this problem. An adequate algorithm for generating a good initial solution can improve a metaheuristic's chances of finding a final solution close to the optimum, compared with starting from a random initial solution. This paper proposes two initial-solution algorithms to be used by a metaheuristic technique (simulated annealing). These algorithms are tested and evaluated against other published initial-solution algorithms. The proposed algorithms have been developed as modules, so that they can be used flexibly by metaheuristics other than simulated annealing. Simulated annealing with the different initial-solution algorithms has been tested on a 37-bus distribution network with distributed resources, especially electric vehicles. The proposed algorithms produced results very close to the optimum, with a difference of around 0.1%. A deterministic technique used for comparison took around 26 h to obtain the optimal solution, whereas simulated annealing obtained its results in around 1 min.
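A sketch of the idea: a generic simulated annealing loop that takes the initial-solution algorithm as a pluggable module, so the same seed generators could serve other metaheuristics. All function names (`initial_solution_fn`, `neighbour_fn`, `cost_fn`) are hypothetical hooks, and the cooling parameters are illustrative.

    import math, random

    def simulated_annealing(initial_solution_fn, neighbour_fn, cost_fn,
                            t0=100.0, cooling=0.95, steps=50, t_min=1e-3):
        # The seed comes from the pluggable initial-solution module
        # instead of a random start.
        current = initial_solution_fn()
        cur_cost = cost_fn(current)
        best, best_cost = current, cur_cost
        t = t0
        while t > t_min:
            for _ in range(steps):
                cand = neighbour_fn(current)
                delta = cost_fn(cand) - cur_cost
                if delta <= 0 or random.random() < math.exp(-delta / t):
                    current, cur_cost = cand, cur_cost + delta
                    if cur_cost < best_cost:
                        best, best_cost = current, cur_cost
            t *= cooling
        return best, best_cost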

6.
The human liver is one of the major organs in the body, and liver disease can cause many problems in human life. Fast and accurate prediction of liver disease allows early and effective treatment. In this regard, various data mining techniques help in better prediction of this disease. Because of the importance of liver disease and the increasing number of people who suffer from it, we studied liver disease using two well-known data mining methods. In this paper, novel decision tree based algorithms are used, which leads to considering more factors in general and to predictions with higher accuracy compared to other studies of liver disease. In this application, 583 instances of the liver disease dataset from the UCI repository are considered; this dataset consists of 416 records of liver disease and 167 records of healthy livers. The dataset is analyzed by two algorithms, Boosted C5.0 and CHAID. To date there is no work in the literature that uses Boosted C5.0 and CHAID for creating rules for liver disease. Our results show that in both algorithms the DB, ALB, SGPT, TB and A/G factors have a significant impact on predicting liver disease; according to the rules generated by both algorithms, the important ranges are DB = [10.900–1.200], ALB = [4.00–4.300], SGPT = [34–37], TB = [0.600–1.200] (by Boosted C5.0), A/G = [1.180–1.390]. In addition, in the Boosted C5.0 algorithm, Alkphos, SGOT and Age have a significant impact on prediction of liver disease. Comparing the performance of these algorithms, the C5.0 algorithm with the Boosting technique has an accuracy of 93.75%, a better performance than the CHAID algorithm at 65.00%. Another important achievement of this paper concerns the ability of both algorithms to produce rules in one class for liver disease; the results of our assessment show that both Boosted C5.0 and CHAID are capable of producing such rules. Our results also show that Boosted C5.0 considers gender in liver disease, a factor which is missing in many other studies; using the rules generated by the Boosted C5.0 algorithm, we obtained the important result that females are less susceptible to liver disease than males. Therefore, our proposed computer-aided diagnostic methods, as an expert and intelligent system, have an impressive impact on liver disease detection. Based on the obtained results, we observed that our model performed better than existing methods in the literature.
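As a rough illustration of the workflow (not the paper's implementation): scikit-learn ships CART rather than C5.0 or CHAID, so a boosted CART stands in for Boosted C5.0 below. The column names follow the UCI Indian Liver Patient Dataset, and the CSV path is hypothetical.

    import pandas as pd
    from sklearn.ensemble import AdaBoostClassifier
    from sklearn.tree import DecisionTreeClassifier
    from sklearn.model_selection import train_test_split
    from sklearn.metrics import accuracy_score

    cols = ["Age", "Gender", "TB", "DB", "Alkphos", "SGPT", "SGOT",
            "TP", "ALB", "A/G", "Selector"]
    df = pd.read_csv("ilpd.csv", names=cols).dropna()   # hypothetical path
    df["Gender"] = (df["Gender"] == "Male").astype(int)
    X, y = df.drop(columns="Selector"), df["Selector"]

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2,
                                              random_state=0, stratify=y)
    # Boosted decision trees as a stand-in for Boosted C5.0.
    clf = AdaBoostClassifier(DecisionTreeClassifier(max_depth=3),
                             n_estimators=50, random_state=0)
    clf.fit(X_tr, y_tr)
    print("accuracy:", accuracy_score(y_te, clf.predict(X_te)))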

7.
We present optimal algorithms for single-machine scheduling problems with earliness criteria and job rejection, and compare them with the algorithms for the corresponding problems with tardiness objectives. We present an optimal O(n log n) algorithm for minimizing the maximum earliness on a single machine with job rejection. Our algorithm also solves the bi-criteria scheduling problem in which the objective is to simultaneously minimize the maximum earliness of the scheduled jobs and the total rejection cost of the rejected jobs. We also show that the optimal pseudo-polynomial time algorithm for the total tardiness problem with job rejection can be used to solve the corresponding total earliness problem with job rejection.

8.
Sylvester’s identity is a well-known identity that can be used to prove that certain Gaussian elimination algorithms are fraction free. In this paper we generalize Sylvester’s identity and use it to prove that certain random Gaussian elimination algorithms are fraction free. This can be used to yield fraction-free algorithms for solving Ax = b (x ≥ 0) and for the simplex method in linear programming.
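For context, the classic fraction-free elimination that Sylvester’s identity underpins is the Bareiss algorithm; a minimal integer-only sketch follows (the non-randomized textbook variant, not the paper's generalization). It assumes nonzero leading principal minors; a full implementation would pivot.

    def bareiss_det(M):
        # Fraction-free Gaussian elimination (Bareiss). Sylvester's
        # identity guarantees each division below is exact, so an integer
        # matrix stays integer throughout. Returns the determinant.
        M = [row[:] for row in M]       # work on a copy
        n = len(M)
        prev = 1                        # previous pivot, starts at 1
        for k in range(n - 1):
            for i in range(k + 1, n):
                for j in range(k + 1, n):
                    M[i][j] = (M[i][j] * M[k][k] - M[i][k] * M[k][j]) // prev
            prev = M[k][k]
        return M[-1][-1]

    print(bareiss_det([[2, 3, 1], [4, 1, -3], [-1, 2, 2]]))  # -> 10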

9.
Protein thermostability information is closely linked to the commercial production of many biomaterials. Recent developments have shown that amino acid composition, special sequence patterns, hydrogen bonds, disulfide bonds, salt bridges and so on are of considerable importance to thermostability. In this study, we present a system that integrates these various factors to predict protein thermostability. The features of proteins in the PGTdb are analyzed; we consider both structure and sequence features, and correlation coefficients are incorporated into the feature selection algorithm. Machine learning algorithms are then used to develop identification systems, and the performances of the different algorithms are compared. Two features, (E + F + M + R)/residue and charged/non-charged, are found to be critical to the thermostability of proteins. Although the combined sequence and structure models achieve a higher accuracy, sequence-only models provide sufficient accuracy for sequence-only thermostability prediction.
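The two critical features are simple to compute from a sequence; a small sketch follows. The charged-residue set (D, E, K, R, H) is the usual convention and an assumption here, since the abstract does not list it.

    CHARGED = set("DEKRH")   # assumed convention for charged residues

    def thermo_features(seq):
        seq = seq.upper()
        n = len(seq)
        efmr = sum(seq.count(a) for a in "EFMR") / n   # (E+F+M+R)/residue
        charged = sum(1 for a in seq if a in CHARGED)
        return {"(E+F+M+R)/residue": efmr,
                "charged/non-charged": charged / (n - charged)}

    print(thermo_features("MEFRKDEGHILMNPQRSTVWY"))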

10.
The various sensory and control signals in a Heating, Ventilation and Air Conditioning (HVAC) system are closely interrelated, which gives rise to severe redundancies between the original signals. These redundancies may cripple the generalization capability of an automatic fault detection and diagnosis (AFDD) algorithm. This paper proposes an unsupervised feature selection approach and its application to AFDD in a HVAC system. Using Ensemble Rapid Centroid Estimation (ERCE), the important features are automatically selected from the original measurements based on the relative entropy between the low- and high-frequency features. The material used is the experimental HVAC fault data from the ASHRAE-1312-RP datasets, containing a total of 49 days of various fault types and severities. The features selected using ERCE (median normalized mutual information (NMI) = 0.019) achieved the least redundancy compared to those selected using manual selection (median NMI = 0.0199), Complete Linkage (median NMI = 0.1305), Evidence Accumulation K-means (median NMI = 0.04) and Weighted Evidence Accumulation K-means (median NMI = 0.048). The effectiveness of the feature selection method is further investigated using two well-established time-sequence classification algorithms: (a) a Nonlinear Auto-Regressive Neural Network with eXogenous inputs and distributed time delays (NARX-TDNN); and (b) Hidden Markov Models (HMM). Weighted average sensitivity and specificity higher than 99% and 96% for NARX-TDNN, and higher than 98% and 86% for HMM, are observed. The proposed feature selection algorithm could potentially be applied to other model-based systems to improve fault detection performance.
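The redundancy measure used for comparison, the median pairwise NMI between selected signals, can be sketched as below (lower = less redundant). The quantile binning of the continuous signals is an assumption; the abstract does not say how they were discretised.

    import numpy as np
    from sklearn.metrics import normalized_mutual_info_score

    def median_pairwise_nmi(X, bins=10):
        # X: (n_samples, n_features) array of the selected signals.
        cuts = [np.quantile(col, np.linspace(0, 1, bins + 1)[1:-1])
                for col in X.T]
        labels = [np.digitize(col, c) for col, c in zip(X.T, cuts)]
        nmis = [normalized_mutual_info_score(labels[i], labels[j])
                for i in range(len(labels)) for j in range(i + 1, len(labels))]
        return float(np.median(nmis))

    rng = np.random.default_rng(0)
    X = rng.normal(size=(500, 6))
    X[:, 1] = X[:, 0] + 0.01 * rng.normal(size=500)  # nearly redundant copy
    print(median_pairwise_nmi(X))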

11.
Computer Networks, 2007, 51(11): 3172–3196
A search-based heuristic is presented for the optimisation of communication networks where traffic forecasts are uncertain and the problem is NP-complete. While algorithms such as genetic algorithms (GA) and simulated annealing (SA) are often used for this class of problem, this work applies a combination of newer optimisation techniques, specifically fast local search (FLS) as an improved hill-climbing method and guided local search (GLS) to allow escape from local minima. The GLS + FLS combination is compared with optimised GA and SA approaches. It is found that, in terms of implementation, the parameterisation of the GLS + FLS technique is significantly simpler than that for a GA or SA. Also, the self-regularisation feature of the GLS + FLS approach provides a distinctive advantage over the other techniques, which require manual parameterisation. To compare numerical performance, the three techniques were tested over a number of network sets varying in size, number of switch circuit demands (network bandwidth demands) and levels of uncertainty on the switch circuit demands. The results show that GLS + FLS outperforms the GA and SA techniques in terms of both solution quality and optimisation speed, and, even more importantly, GLS + FLS has significantly reduced parameterisation time.
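The GLS mechanism can be sketched generically: when local search stalls, the features of the current optimum with the highest utility (cost divided by one plus their penalty) are penalised, which reshapes the augmented objective and drives the search out of the local minimum. The hooks `features_of`, `feature_cost` and `local_search` are hypothetical, and `lam` is the usual regularisation weight.

    def guided_local_search(initial, cost_fn, features_of, feature_cost,
                            local_search, lam=0.2, rounds=100):
        penalties = {}                                # feature -> count

        def augmented(sol):                           # penalised objective
            return cost_fn(sol) + lam * sum(penalties.get(f, 0)
                                            for f in features_of(sol))

        current = local_search(initial, augmented)
        best, best_cost = current, cost_fn(current)
        for _ in range(rounds):
            # Penalise max-utility features of the current local optimum.
            util = {f: feature_cost(f) / (1 + penalties.get(f, 0))
                    for f in features_of(current)}
            top = max(util.values())
            for f, u in util.items():
                if u == top:
                    penalties[f] = penalties.get(f, 0) + 1
            current = local_search(current, augmented)
            if cost_fn(current) < best_cost:
                best, best_cost = current, cost_fn(current)
        return best, best_cost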

12.
In this paper, a new approach for multiyear expansion planning of distribution systems (MEPDS) is presented. The proposed MEPDS model optimally specifies the expansion schedule of distribution systems, including the reinforcement scheme of distribution feeders as well as the sizing and location of distributed generations (DGs), during a certain planning horizon. Moreover, it can determine the optimal timing (i.e. year) of each investment/reinforcement. The objective function of the proposed MEPDS model minimizes the total investment, operation and emission costs while satisfying various technical and operational constraints. In order to solve the presented MEPDS model, a complicated multi-dimensional optimization problem, a new two-stage solution approach composed of a binary modified imperialist competitive algorithm (BMICA) and Improved Shark Smell Optimization (ISSO), i.e. BMICA + ISSO, is presented. The performance of the suggested MEPDS model and the two-stage BMICA + ISSO solution approach is verified by applying them to two distribution systems, a classic 34-bus and a real-world 94-bus distribution system, as well as to a well-known benchmark function. Additionally, the results of BMICA + ISSO are compared with those of other two-stage solution methods.

13.
The scheduling problem in a multi-stage hybrid flowshop has been the subject of considerable research. All the studies on this subject assume that each job has to be processed on all the stages, i.e., there are no missing operations for a job at any stage. However, missing operations usually exist in many real-life production systems, such as the system in a stainless steel factory investigated in this note. The studied production system is composed of two stages in series: the first stage contains only one machine, while the second stage consists of two identical machines (namely a 1 × 2 hybrid flowshop). In the system, some jobs have to be processed on both stages, but others need only be processed on the second stage. Accordingly, the addressed scheduling problem is a 1 × 2 hybrid flowshop with missing operations at the first stage. In this note, we develop a heuristic for the problem that generates a non-permutation schedule (NPS) from a given permutation schedule, with the objective of minimizing the makespan. Computational results demonstrate that the heuristic can efficiently generate better NPS solutions.
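The makespan evaluation for this shop is straightforward to sketch: jobs visit the single stage-1 machine in sequence (jobs with a missing first operation skip it, encoded here as a zero time, an assumption of this sketch) and then take the earlier-free of the two stage-2 machines.

    def makespan(seq, p1, p2):
        # seq: job order; p1[j]: stage-1 time (0 = missing operation);
        # p2[j]: stage-2 time.
        m1_free = 0.0                 # stage-1 machine availability
        m2_free = [0.0, 0.0]          # the two identical stage-2 machines
        for j in seq:
            ready = 0.0
            if p1[j] > 0:             # job has a stage-1 operation
                m1_free += p1[j]
                ready = m1_free
            k = 0 if m2_free[0] <= m2_free[1] else 1  # earlier-free machine
            m2_free[k] = max(m2_free[k], ready) + p2[j]
        return max(m2_free)

    print(makespan([0, 1, 2, 3], p1=[3, 0, 2, 0], p2=[4, 5, 3, 2]))  # 9.0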

14.
This paper studies decision rules for ambulance scheduling. The scheduling decision rules embedded in decision support systems for emergency ambulance scheduling consider criteria on the average response time and on the percentage of ambulance requests responded to within 15 min, a criterion usually ignored in traditional scheduling policies. The challenge in designing the decision rules lies in the stochastic and dynamic nature of request arrivals, fulfillment processes and complex traffic conditions, as well as the time-dependent spatial patterns of some parameters, all of which complicate the decisions in the problem. To illustrate the proposed decision rules' usage in practice, a simulator is developed, and numerical experiments validate the effectiveness and efficiency of the proposed decision rules.

15.
This paper presents the design and implementation of a Petri net (PN) model for the control of a flexible manufacturing system (FMS). A flexible automotive manufacturing system used in this environment enables quick cell configuration and efficient operation of cells. In this paper, we propose a flexible automotive manufacturing approach for modeling and analysis of the shop floor scheduling problem of FMSs using high-level PNs. Since PNs have emerged as the principal performance modeling tool for FMSs, this paper provides an object-oriented Petri net (OOPN) approach to performance modeling and to implementing efficient production control. In this study, we modeled the system as a timed marked graph (TMG), a well-known subclass of PNs, and we showed that the problem of performance evaluation can be reduced to a simple linear programming (LP) problem with m − n + 1 variables and n constraints, where m and n represent the number of places and transitions in the marked graph, respectively. The presented PN-based method is illustrated by modeling real-time scheduling and control for a flexible automotive manufacturing system (FAMS) at Valeo Turkey.

16.
This paper proposes an efficient approach to solve a cross-fab route planning problem for semiconductor wafer manufacturing. A semiconductor company usually adopts a dual-fab strategy: two fab sites are built adjacent to each other to facilitate capacity-sharing. A product thus may be produced by a cross-fab route; that is, some operations of a product are manufactured in one fab and the other operations in the other fab. This leads to a cross-fab route planning problem, which involves two decisions, determining the cut-off point of the cross-fab route and the route ratio for each product, in order to maximize the throughput subject to a cycle time constraint. A prior study has proposed a method to solve the cross-fab route planning problem, yet it is computationally expensive for large-scale cases. To alleviate this deficiency, we propose three enhanced methods. Experimental results show that the best enhanced method can significantly reduce the computational effort from about 13 h to 0.5 h, while obtaining a satisfactory solution.

17.
ESA's upcoming satellites Sentinel-2 (S2) and Sentinel-3 (S3) aim to ensure continuity for Landsat 5/7, SPOT-5, SPOT-Vegetation and Envisat MERIS observations by providing superspectral images of high spatial and temporal resolution. S2 and S3 will deliver near real-time operational products with high accuracy for land monitoring. This unprecedented data availability leads to an urgent need for developing robust and accurate retrieval methods. Machine learning regression algorithms may be powerful candidates for the estimation of biophysical parameters from satellite reflectance measurements because of their ability to perform adaptive, nonlinear data fitting. Using data from the ESA-led field campaign SPARC (Barrax, Spain), we compared the utility of four state-of-the-art machine learning regression algorithms and four different S2 and S3 band settings for assessing three important biophysical parameters: leaf chlorophyll content (Chl), leaf area index (LAI) and fractional vegetation cover (FVC). The tested Sentinel configurations were S2-10 m (4 bands), S2-20 m (8 bands), S2-60 m (10 bands) and S3-300 m (19 bands), and the tested methods were neural networks (NN), support vector regression (SVR), kernel ridge regression (KRR) and Gaussian processes regression (GPR). GPR outperformed the other retrieval methods for the majority of tested configurations and was the only method that reached the 10% precision required by end users in the estimation of Chl. Also, although validated with an RMSE accuracy around 20%, GPR yielded optimal LAI and FVC estimates at the highest S2 spatial resolution of 10 m with only four bands. In addition to high accuracy, GPR also provided confidence intervals for the estimates and insight into the relevant bands, which are key advantages over the other methods. Given all this, GPR proved to be a fast and accurate nonlinear retrieval algorithm that can potentially be implemented for operational monitoring applications.
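A minimal scikit-learn sketch of GPR of the kind compared in the study, on synthetic reflectance-like data (the four "bands" and the toy target are purely illustrative). The per-sample confidence interval the abstract highlights comes from predicting with return_std=True.

    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import RBF, WhiteKernel

    rng = np.random.default_rng(0)
    X = rng.uniform(0, 1, size=(100, 4))     # 4 bands, as in S2-10 m
    y = 3 * X[:, 0] - 2 * X[:, 2] + 0.1 * rng.normal(size=100)  # toy target

    gpr = GaussianProcessRegressor(kernel=RBF() + WhiteKernel(),
                                   normalize_y=True).fit(X, y)
    mean, std = gpr.predict(X[:5], return_std=True)
    for m, s in zip(mean, std):
        print(f"estimate {m:.2f} +/- {1.96 * s:.2f}")  # ~95% interval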

18.
In biometric systems, reference facial images captured during enrollment are commonly secured using watermarking, where invisible watermark bits are embedded into the images. Evolutionary Computation (EC) is widely used to optimize embedding parameters in intelligent watermarking (IW) systems. Traditional IW methods represent all blocks of a cover image as candidate embedding solutions of the EC algorithm, and suffer from premature convergence when dealing with high-resolution grayscale facial images. For instance, the optimization problem for a 2048 × 1536 pixel grayscale facial image that embeds 1 bit per 8 × 8 pixel block involves 49k variables represented with 293k binary bits. Such Large-Scale Global Optimization problems cannot be decomposed into smaller independent ones because the watermarking metrics are calculated for the entire image. In this paper, a Blockwise Coevolutionary Genetic Algorithm (BCGA) is proposed for high-dimensional IW optimization of the embedding parameters of high-resolution images. BCGA is based on cooperative coevolution between different candidate solutions at the block level, using a local Block Watermarking Metric (BWM). It is characterized by a novel elitism mechanism driven by local blockwise metrics, where the blocks with higher BWM values are selected to form candidate solutions of higher global fitness. The crossover and mutation operators of BCGA are performed at the block level. Experimental results on the PUT face image database indicate a 17% improvement in fitness produced by BCGA compared to a classical GA. Due to its improved exploration capabilities, BCGA reaches convergence in fewer generations, indicating an optimization speedup.

19.
This work starts from modeling the scheduling of n jobs on m machines/stages as a flowshop with buffers in manufacturing. A mixed-integer linear programming model is presented, showing that buffers of size n − 2 allow permuting sequences of jobs between stages. This model is addressed in the literature as non-permutation flowshop scheduling (NPFS) and is described in this article by a disjunctive graph (digraph), with the purpose of designing specialized heuristic and metaheuristic algorithms for the NPFS problem. Ant colony optimization (ACO) with the biologically inspired mechanisms of learned desirability and the pheromone rule is shown to produce natively eligible schedules, as opposed to most metaheuristic approaches, which improve permutation solutions found by other heuristics. The proposed ACO has been critically compared and assessed in computational experiments against existing native approaches. Most makespan upper bounds of the established benchmark problems from Taillard (1993) and Demirkol, Mehta, and Uzsoy (1998), with up to 500 jobs on 20 machines, have been improved by the proposed ACO.
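The pheromone-plus-desirability construction step can be sketched as the standard random-proportional rule: the next job is drawn with probability proportional to pheromone^alpha times desirability^beta. The parameter values and data layout below are illustrative, not the paper's.

    import random

    def construct_sequence(n_jobs, tau, eta, alpha=1.0, beta=2.0):
        # tau[i][j]: pheromone for placing job j after job i, with row
        # tau[n_jobs] for the first position; eta[j]: desirability of j.
        seq, last = [], n_jobs              # n_jobs = virtual start node
        remaining = set(range(n_jobs))
        while remaining:
            weights = {j: (tau[last][j] ** alpha) * (eta[j] ** beta)
                       for j in remaining}
            r, acc = random.random() * sum(weights.values()), 0.0
            for j, w in weights.items():    # roulette-wheel selection
                acc += w
                if acc >= r:
                    break
            seq.append(j)
            remaining.remove(j)
            last = j
        return seq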

20.
Open learning environments, such as Massive Open Online Courses (MOOCs), often lack adequate learner collaboration opportunities; they are also plagued by high levels of drop-out. Introducing project-based learning (PBL) can enhance learner collaboration and motivation, but PBL does not easily scale up into MOOCs. To support the definition and staffing of projects, team formation principles and algorithms are introduced to form productive, creative, or learning teams. These use data on the project and on learner knowledge, personality and preferences. A study was carried out to validate the principles and the algorithms; students (n = 168) and educational practitioners (n = 56) provided the data. The principles for learning teams and productive teams were accepted, while the principle for creative teams was not. The algorithms were validated using team classifying and team ranking tasks. The practitioners classify and rank small productive, creative and learning teams in accordance with the algorithms, thereby validating the algorithms' outcomes. When team size grows, forming teams quickly becomes complex for practitioners, as demonstrated by the increased divergence in ranking and classifying accuracy. A discussion of the results, conclusions, and directions for future research is provided.
