Similar Literature
20 similar documents found (search time: 31 ms)
1.
The scheduling problem in a multi-stage hybrid flowshop has been the subject of considerable research. All studies on this subject assume that each job has to be processed on all stages, i.e., there are no missing operations for a job at any stage. However, missing operations exist in many real-life production systems, such as the stainless-steel factory system investigated in this note. The studied production system is composed of two stages in series: the first stage contains only one machine, while the second stage consists of two identical machines (namely a 1 × 2 hybrid flowshop). In the system, some jobs have to be processed on both stages, while others need to be processed only on the second stage. Accordingly, the addressed scheduling problem is a 1 × 2 hybrid flowshop with missing operations at the first stage. In this note, we develop a heuristic that generates a non-permutation schedule (NPS) from a given permutation schedule, with the objective of minimizing the makespan. Computational results demonstrate that the heuristic efficiently generates better NPS solutions.

2.
This work starts by modeling the scheduling of n jobs on m machines/stages as a flowshop with buffers in manufacturing. A mixed-integer linear programming model is presented, showing that buffers of size n − 2 allow permuting job sequences between stages. This setting, known in the literature as non-permutation flowshop scheduling (NPFS), is described in this article by a disjunctive graph (digraph) for the purpose of designing specialized heuristic and metaheuristic algorithms for the NPFS problem. Ant colony optimization (ACO), with the biologically inspired mechanisms of learned desirability and a pheromone rule, is shown to produce natively eligible schedules, as opposed to most metaheuristic approaches, which improve permutation solutions found by other heuristics. The proposed ACO has been critically compared with existing native approaches in computational experiments. Most makespan upper bounds of the established benchmark problems from Taillard (1993) and Demirkol, Mehta, and Uzsoy (1998), with up to 500 jobs on 20 machines, have been improved by the proposed ACO.
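In a non-permutation schedule like the one the abstract above describes, the job order may differ from machine to machine, so a schedule's makespan is the longest path in the DAG of operations. A minimal sketch of that evaluation (hypothetical helper name `npfs_makespan`; it assumes the per-machine sequences are deadlock-free, i.e., the precedence graph is acyclic):

```python
from collections import deque

def npfs_makespan(seqs, p):
    """Makespan of a non-permutation flowshop schedule.

    seqs[k] is the job sequence on machine k (sequences may differ
    between machines); p[j][k] is the processing time of job j on
    machine k.  Operation (j, k) must wait for (j, k-1) in the job's
    route and for its predecessor in seqs[k]; completion times are
    longest paths, computed in topological order (Kahn's algorithm).
    """
    n, m = len(p), len(p[0])
    preds = {(j, k): [] for j in range(n) for k in range(m)}
    for j in range(n):
        for k in range(1, m):
            preds[(j, k)].append((j, k - 1))   # route precedence
    for k, seq in enumerate(seqs):
        for a, b in zip(seq, seq[1:]):
            preds[(b, k)].append((a, k))       # machine precedence
    succs = {op: [] for op in preds}
    indeg = {op: len(ps) for op, ps in preds.items()}
    for op, ps in preds.items():
        for q in ps:
            succs[q].append(op)
    finish = {}
    ready = deque(op for op, d in indeg.items() if d == 0)
    while ready:
        j, k = ready.popleft()
        start = max((finish[q] for q in preds[(j, k)]), default=0)
        finish[(j, k)] = start + p[j][k]
        for q in succs[(j, k)]:
            indeg[q] -= 1
            if indeg[q] == 0:
                ready.append(q)
    return max(finish.values())
```

With identical sequences on every machine this reduces to the ordinary permutation-flowshop makespan, which makes it easy to sanity-check.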

3.
The m-machine permutation flowshop scheduling problem (PFSP) with the objectives of minimizing the makespan and the total flowtime is a common scheduling problem, known to be NP-complete in the strong sense when m ≥ 3. This work proposes a new algorithm for solving the PFSP, namely combinatorial Particle Swarm Optimization. Furthermore, we incorporate into this heuristic an improvement procedure based on simulated annealing. The proposed algorithm was applied to well-known benchmark problems and compared with several competing metaheuristics.
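Any metaheuristic for the PFSP, like the one above, spends most of its time evaluating the makespan of candidate permutations. A minimal sketch of the standard evaluation (hypothetical helper name `pfsp_makespan`), using the classic recurrence C[j][k] = max(C[j−1][k], C[j][k−1]) + p[j][k]:

```python
def pfsp_makespan(perm, p):
    """Makespan of a permutation flowshop schedule.

    perm is the common job order on all machines; p[j][k] is the
    processing time of job j on machine k.  Only one row of
    completion times is kept, so evaluation is O(n*m) time, O(m) space.
    """
    m = len(p[0])
    c = [0] * m  # c[k]: completion time of the last scheduled job on machine k
    for j in perm:
        c[0] += p[j][0]
        for k in range(1, m):
            c[k] = max(c[k], c[k - 1]) + p[j][k]
    return c[-1]
```

For example, with p = [[3, 2], [1, 4]] the order (1, 0) gives makespan 7 while (0, 1) gives 9, so the shorter first job should lead.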

4.
The production of bakery goods is strictly time-sensitive due to the complex biochemical processes during dough fermentation, which leads to special requirements for production planning and scheduling. Instead of mathematical methods, scheduling in bakeries is often based entirely on the practical experience of the responsible employees, and such experience-based scheduling often leads to sub-optimal company performance. This paper models bakery production as a kind of no-wait hybrid flow-shop following the definitions of scheduling theory, respecting the constraints and boundary conditions imposed by the processes employed. Particle Swarm Optimization and Ant Colony Optimization, two widely used bio-inspired algorithms for scheduling problems, were adapted and used to analyse and optimize the production planning of an example bakery. In combination with the created model, both algorithms proved capable of providing optimized schedules within a predefined runtime of 15 min.

5.
This paper deals with a permutation flow-shop scheduling problem with finite intermediate storage (PFSFIS) between successive machines, the objective being to minimize makespan. In this problem, intermediate storage capacity constraints are considered in addition to the machine-related constraints usually discussed for the general permutation flow-shop, which adds extra difficulty to the scheduling problem. We present some new block properties and a speed-up method, based on a forward-backward hybrid algorithm, for computing the makespan. Applied in a tabu search algorithm, the new block properties greatly reduce the neighborhood size and thus shorten the search time, while the speed-up method eliminates redundant computation of the objective function and removes the bulk of the running time. Computational experiments (up to 200 jobs and 20 machines) demonstrate the effectiveness of the new block neighborhood characteristics and the speed-up method. Compared with the results of the best-known algorithm, the objective function is improved by 0.14% when the new block neighborhood characteristics are used, and the running time is reduced by 53.7% on average when the speed-up method is used. With equal running times, the objective function is improved by 0.24% when both improvements are applied to the original tabu search.

6.
In this paper, we consider an ordinal on-line scheduling problem. A sequence of n independent jobs has to be assigned non-preemptively to two uniformly related machines. We study two objectives: maximizing the minimum machine completion time, and minimizing the ℓp norm of the completion times. The values of the processing times are unknown at the time of assignment; however, it is known in advance that the processing times of arriving jobs are sorted in non-increasing order. We must construct an assignment of all jobs to the machines at time zero, utilizing only ordinal data rather than the actual magnitudes of the jobs. For maximizing the minimum completion time, we first present a comprehensive lower bound on the competitive ratio, which is a piecewise function of the machine speed ratio s. We then propose an algorithm which is optimal for any s ≥ 1. For minimizing the ℓp norm, we study the case of identical machines (s = 1) and present tight bounds as a function of p.
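To make the ordinal setting of the abstract above concrete: an ordinal algorithm commits to an assignment using only the ranks of the jobs. The sketch below (hypothetical names; a simple round-robin baseline, not the paper's optimal speed-dependent algorithm) assigns ranks cyclically and then evaluates the minimum machine completion time once the true processing times and speeds are revealed:

```python
def ordinal_round_robin(n, m=2):
    """Assign job ranks 0..n-1 (jobs are presented sorted by
    non-increasing processing time) cyclically to m machines,
    using only ordinal information."""
    groups = [[] for _ in range(m)]
    for rank in range(n):
        groups[rank % m].append(rank)
    return groups

def min_completion_time(groups, p, speeds):
    """Minimum machine completion time once the true processing
    times p (indexed by rank) and machine speeds are revealed."""
    return min(sum(p[j] for j in g) / s for g, s in zip(groups, speeds))
```

For instance, five jobs on two identical machines (s = 1) are split into ranks {0, 2, 4} and {1, 3}; with times [5, 4, 3, 2, 1] the loads are 9 and 6, so the objective value is 6.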

7.
A cobaloxime ([chlorobis(dimethylglyoximeato)(triphenylphosphine)]cobalt(III), [Co(dmgH)2pph3Cl]) incorporated in a plasticized poly(vinyl chloride) membrane was used to develop a perchlorate-selective electrode. The influence of membrane composition on the electrode response was studied. The electrode exhibits a Nernstian response over the perchlorate concentration range 1.0 × 10−6 to 1 × 10−1 mol l−1 with a slope of −56.8 ± 0.7 mV per decade of concentration, a detection limit of 8.3 × 10−7 mol l−1, a wide working pH range (3–10) and a fast response time (<15 s). The electrode shows excellent selectivity towards perchlorate over many common anions. The electrode was used to determine perchlorate in water and human urine.

8.
Impaired water quality caused by human activity and the spread of invasive plant and animal species have been identified as major factors in the degradation of coastal ecosystems in the tropics. The main goal of this study was to evaluate the performance of AnnAGNPS (Annualized Agricultural Non-Point Source pollution model) in simulating runoff and soil erosion in a 48 km2 watershed located on the island of Kauai, Hawaii. The model was calibrated and validated using two years of observed stream-flow and sediment-load data, and alternative scenarios of spatial rainfall distribution and canopy interception were evaluated. Monthly runoff volumes predicted by AnnAGNPS compared well with the measured data (R2 = 0.90, P < 0.05); however, differences of up to 60% between actual and simulated runoff were observed during the driest months (May and July). Prediction of daily runoff was less accurate (R2 = 0.55, P < 0.05), and predicted and observed daily sediment yields were poorly correlated (R2 = 0.5, P < 0.05). For events of small magnitude the model generally overestimated sediment yield, while the opposite was true for larger events. Total monthly sediment yield varied within 50% of the observed values, except for May 2004. Among the input parameters, the model was most sensitive to ground residue cover and canopy cover. Approximately one third of the watershed area had low sediment yield (0–1 t ha−1 y−1) and presented limited erosion threat, whereas 5% of the area had sediment yields in excess of 5 t ha−1 y−1. Overall, the model performed reasonably well, and it can be used as a management tool on tropical watersheds to estimate and compare sediment loads and to identify “hot spots” on the landscape.

9.
We show that Graph Isomorphism is in the complexity class SPP, and hence in ⊕P (in fact, in ModkP for each k ≥ 2); these inclusions for Graph Isomorphism were not known previously. We derive this result as a corollary of a more general result: we show that a generic problem, FIND-GROUP, has an FPSPP algorithm. This general result has other consequences: for example, it follows that the hidden subgroup problem for permutation groups, studied in the context of quantum algorithms, has an FPSPP algorithm. Also, some other algorithmic problems over permutation groups known to be at least as hard as Graph Isomorphism (e.g., coset intersection) are in SPP, and thus in ModkP for each k ≥ 2.
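For readers unfamiliar with the decision problem behind the complexity result above: Graph Isomorphism asks whether two graphs are identical up to a relabelling of vertices. A brute-force sketch (hypothetical helper names; O(n!) time, purely illustrative and unrelated to the paper's FIND-GROUP machinery):

```python
from itertools import permutations

def edge_set(adj):
    """Undirected edges of an adjacency-list graph, as frozensets."""
    return {frozenset((u, v)) for u, nbrs in enumerate(adj) for v in nbrs}

def isomorphic(g, h):
    """Decide Graph Isomorphism by trying every vertex bijection."""
    if len(g) != len(h):
        return False
    eg, eh = edge_set(g), edge_set(h)
    if len(eg) != len(eh):
        return False  # cheap invariant: edge counts must match
    return any(
        {frozenset(pi[v] for v in e) for e in eg} == eh
        for pi in permutations(range(len(g)))
    )
```

The interest of the SPP result is precisely that no polynomial-time algorithm is known, yet the problem sits in unexpectedly small counting classes.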

10.
At the Ejby Mølle WWTP in Odense, Denmark, a software sensor predicts the ammonium and nitrite + nitrate concentrations in real time based on ammonium and redox-potential measurements. The predicted ammonium concentration is used to control the length of the nitrification phase in a Biodenipho® activated sludge unit, because the software sensor has a shorter response time and better up-time than the ammonium meter. The software sensor simplifies meter service and can reduce maintenance costs; the computed nitrite + nitrate concentration is an added benefit. On 4 different days, series of grab samples of the mixed liquor were collected in the aeration tanks. The average difference between the ammonium concentrations in the grab samples and the predicted ammonium concentration was 0.2 mg N L−1, and the average difference between the predicted and the measured nitrite + nitrate concentration was 0.3 mg N L−1. The agreement between the predicted ammonium concentration and the grab samples was better than the agreement between the ammonium meter and the grab samples, owing to the shorter response time of the software sensor compared with the ammonium meter.

11.
Parallel Computing, 2014, 40(5–6): 144–158
One of the main difficulties in using multi-point statistical (MPS) simulation based on annealing techniques or genetic algorithms is the excessive amount of time and memory that must be spent to achieve convergence. In this work we propose code optimizations and parallelization schemes for a genetic-based MPS code with the aim of speeding up execution. The code optimizations involve reducing cache misses in array accesses, avoiding branch instructions, and increasing the locality of the accessed data. The hybrid parallelization scheme combines fine-grain parallelization of loops using a shared-memory programming model (OpenMP) with coarse-grain distribution of load among several computational nodes using a distributed-memory programming model (MPI). Convergence, execution-time and speed-up results are presented using 2D training images of sizes 100 × 100 × 1 and 1000 × 1000 × 1 on a distributed-shared-memory supercomputing facility.

12.
A new optical sensor for mercury(II) ions is developed, based on immobilization of 4-(2-pyridylazo)-resorcinol (PAR) on a triacetylcellulose membrane. Chemical binding of Hg2+ ions in solution with the PAR immobilized on the triacetylcellulose surface can be monitored spectrophotometrically at 525 nm. The optode shows excellent response over a wide concentration range of 5–3360 μM Hg(II), with a limit of detection of 1.5 μM Hg(II). The factors responsible for the sensitivity of the sensor were studied and identified. The response time of the optode was 20 min for an unstirred solution and 15 min for a stirred solution. The influence of potential interfering ions on the determination of 5 × 10−5 M Hg(II) was studied. The sensor was applied to the determination of Hg(II) in water samples.

13.
An electrochemical sensor based on a triazole (TA) self-assembled monolayer (SAM) modified gold electrode (TA SAM/Au) was fabricated, and the electrochemical behavior of epinephrine (EP) at the TA SAM/Au was studied. The TA SAM/Au shows excellent electrocatalytic activity for the oxidation of EP and accelerates electron transfer; the diffusion coefficient is 1.135 × 10−6 cm2 s−1. Under the optimum experimental conditions (0.1 mol L−1 sodium borate buffer, pH 4.4; accumulation time 180 s; accumulation potential 0.6 V; scan rate 0.1 V s−1), the cathodic peak current of EP measured by square wave adsorptive stripping voltammetry (SWASV) is linear with its concentration in the ranges 1.0 × 10−7 to 1.0 × 10−5 mol L−1 and 1.0 × 10−5 to 6.0 × 10−4 mol L−1, with correlation coefficients of 0.9985 and 0.9996, respectively; the detection limit is 1.0 × 10−8 mol L−1. The TA SAM/Au can be used for the determination of EP in practical injection samples. Meanwhile, the oxidation peak potentials of EP and ascorbic acid (AA) are well separated, by about 200 ± 10 mV, at the TA SAM/Au, and the oxidation peak current increases approximately linearly with the concentration of both EP and AA in the range 2.0 × 10−5 to 1.6 × 10−4 mol L−1, so the sensor can also be used for the simultaneous determination of EP and AA.

14.
Tri-o-thymotide (I) has been used as an electroactive material in a PVC (poly(vinyl chloride)) matrix for fabrication of a chromium(III)-selective sensor. The membrane containing tri-o-thymotide, sodium tetraphenylborate (NaTPB), dibutyl phthalate (DBP) and PVC in the optimum ratio 5:1:75:100 (w/w) exhibits a working concentration range of 4.0 × 10−6 to 1.0 × 10−1 M with a Nernstian slope of 20.0 ± 0.1 mV/decade of activity in the pH range 2.8–5.1. The detection limit of this sensor is 2.0 × 10−7 M. The electrode exhibits a fast response time of 15 s, shows good selectivity towards Cr3+ over a number of mono-, bi- and trivalent cations, and can also be used in partially non-aqueous media (up to 15%, v/v). The assembly has been successfully used as an indicator electrode in the potentiometric titration of chromium(III) against EDTA and to determine Cr(III) quantitatively in electroplating-industry waste samples.

15.
The hypercube Qn is one of the most popular networks. In this paper, we first prove that the n-dimensional hypercube is (2n − 5)-conditional fault-bipancyclic. That is, an injured hypercube with up to 2n − 5 faulty links has a cycle of length l for every even l with 4 ≤ l ≤ 2^n, provided each node of the hypercube is incident with at least two healthy links. In addition, if a certain node is incident with fewer than two healthy links, we show that an injured hypercube with up to 2n − 3 faulty links contains cycles of all even lengths except Hamiltonian cycles. Furthermore, both of these results are optimal. In conclusion, we find cycles of all possible lengths in injured hypercubes with up to 2n − 5 faulty links under all possible fault distributions.
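The fault-bipancyclicity claim above can be checked exhaustively for small n. A brute-force sketch (hypothetical helper names; exponential-time DFS, feasible only for small hypercubes) that lists the even cycle lengths surviving a given set of faulty links in Q_n:

```python
def hypercube_edges(n):
    """Edges of Q_n: vertices are n-bit labels, edges flip one bit."""
    return {frozenset((v, v ^ (1 << i)))
            for v in range(2 ** n) for i in range(n)}

def even_cycle_lengths(n, faults):
    """Even cycle lengths present in Q_n after removing the faulty
    edges (given as a collection of frozenset pairs)."""
    adj = {v: set() for v in range(2 ** n)}
    for u, v in map(tuple, hypercube_edges(n) - set(faults)):
        adj[u].add(v)
        adj[v].add(u)

    def dfs(start, v, depth, visited, target):
        # depth counts visited vertices; close the cycle at the target length
        if depth == target:
            return start in adj[v]
        return any(w not in visited and
                   dfs(start, w, depth + 1, visited | {w}, target)
                   for w in adj[v])

    return [l for l in range(4, 2 ** n + 1, 2)
            if any(dfs(s, s, 1, {s}, l) for s in adj)]
```

For n = 3 the theorem allows 2n − 5 = 1 faulty link, and indeed removing one edge still leaves cycles of lengths 4, 6 and 8.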

16.
Though scheduling problems have been investigated extensively over the last 50 years, this topic still drives the research activity of many experts and practitioners, especially due to a series of recent studies emphasizing the closeness between theory and industrial practice. This paper investigates the scheduling problem of a hybrid flow shop with m stages, inspired by a real micro-electronics manufacturing environment. Overlap between jobs of the same type, waiting-time limits for jobs within inter-stage buffers, and machine unavailability intervals are just some of the constraints that characterize the problem. A mixed-integer linear programming model of the problem has been developed with the aim of validating the performance of the proposed optimization technique, a two-phase metaheuristic (ME). In the first phase, the proposed ME algorithm evolves like a genetic algorithm equipped with a regular permutation encoding. Subsequently, since the permutation encoding cannot explore the overall solution space, a random search algorithm equipped with an m-stage permutation encoding is launched to improve the algorithm's strength in terms of both exploration and exploitation. Extensive numerical studies on a benchmark of problems, along with a properly arranged ANOVA analysis, demonstrate that the proposed approach statistically outperforms the traditional optimization approach based on a single encoding. Finally, a comprehensive comparative analysis involving the proposed algorithm and several metaheuristics from the literature demonstrates the effectiveness of the dual-encoding approach for solving HFS scheduling problems.

17.
This paper gives an overview of the development of silicon microphones fabricated in a standard BiCMOS process line at Infineon. MEMS development has resulted in reliable processes for high-sensitivity poly-silicon membranes. Microphones with sensitivity up to −39 dB V/Pa at 2 V bias and a signal-to-noise ratio of up to 65 dB(A) are presented. The impact of packaging on the product design is described. As an example, a directional microphone with cardioid response and backward noise suppression of 19 dB is described.

18.
The Job-Shop Scheduling Problem (JSSP) is well known for its complexity as an NP-hard disjunctive scheduling problem. The problem addressed in this paper is the JSSP with the objective of minimizing makespan while satisfying a number of hard constraints. An efficient GRASP × ELS approach is introduced for solving this problem. Its efficiency is evaluated on the widely used 40 Lawrence instances, which encompass medium- and large-scale instances. The computational results show that the proposed method competes with the best published methods in both solution quality and computational time. Recently, Web services have generated great interest among researchers. Such application architectures are based on the client–server model using existing Internet protocols and open standards, and they offer new ways to deliver optimization methods. The proposed GRASP × ELS is packaged into a Web Service (WS), i.e., it offers the research community open access to our optimization approach. Moreover, the proposed web service can be included in future research work with very little programming effort. To encourage use of the web service and to demonstrate how easily it can be consumed, we provide a Java example showing that a client application using the different methods exposed by the web service can be obtained in less than 10 min. Such usage resembles classical library inclusion in a program, with the difference that a method is called on the client side and executed on the server. The Web Service paradigm is a new approach to disseminating algorithms, and this paper therefore stands at the crossroads of the optimization research community and the web service community.
The GRASP × ELS provided in the web service is a state-of-the-art method which competes with previously published ones and has the advantage of being available for free, from any language, anywhere, helping to spread operations research contributions.

19.
Computers & Fluids, 2006, 35(8–9): 863–871
Following the work of Lallemand and Luo [Lallemand P, Luo L-S. Theory of the lattice Boltzmann method: acoustic and thermal properties in two and three dimensions. Phys Rev E 2003;68:036706], we validate, apply and extend the hybrid thermal lattice Boltzmann scheme (HTLBE) with a large-eddy approach to simulate turbulent convective flows. For the mass and momentum equations, a multiple-relaxation-time LBE scheme is used, while the heat equation is solved numerically by a finite difference scheme. We extend the hybrid model with a Smagorinsky subgrid-scale model for both the fluid flow and the heat flux. Validation studies are presented for laminar and turbulent natural convection in a cavity at Rayleigh numbers up to 5 × 10^10 for Pr = 0.71, using a serial code in 2D and a parallel code in 3D, respectively. Correlations of the Nusselt number are discussed and compared to benchmark data. As an application, we simulated forced convection in a building with an inner courtyard at Re = 50 000.

20.
Light use efficiency (LUE) is an important variable characterizing plant eco-physiological function, referring to the efficiency with which absorbed solar radiation is converted into photosynthates. Estimating LUE at regional to global scales would be a significant advance for global carbon cycle research. Traditional methods for canopy-level LUE determination require meteorological inputs which cannot easily be obtained by remote sensing. Here we propose a new algorithm that incorporates the enhanced vegetation index (EVI) and a modified form of land surface temperature (Tm) to estimate monthly forest LUE from Moderate Resolution Imaging Spectroradiometer (MODIS) imagery. Results demonstrate that a model based on EVI × Tm, parameterized from ten forest sites, provides reasonable estimates of monthly LUE for temperate and boreal forest ecosystems in North America, with an R2 of 0.51 (p < 0.001) for the overall dataset. The regression coefficients (a, b) of the LUE–EVI × Tm correlation for these ten sites are closely correlated with the average EVI (EVI_ave, R2 = 0.68, p = 0.003) and the minimum land surface temperature (LST_min, R2 = 0.81, p = 0.009), providing a possible approach for model calibration. The calibrated model gives comparably good estimates of LUE for another ten independent forest ecosystems, with an overall root mean square error (RMSE) of 0.055 g C per mol photosynthetically active radiation. These results are especially important for evergreen species because of their limited variability in canopy greenness. The usefulness of the new LUE algorithm is further validated by estimating gross primary production (GPP) at these sites, with an RMSE of 37.6 g C m−2 month−1 for all observations, a 28% improvement over the standard MODIS GPP products.
These analyses should be helpful in the further development of ecosystem remote sensing methods and improving our understanding of the responses of various ecosystems to climate change.  相似文献   
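The abstract above fits monthly LUE against EVI × Tm with regression coefficients (a, b). A minimal ordinary-least-squares sketch on invented numbers, only to illustrate the model form LUE ≈ a·(EVI × Tm) + b (the sample values are hypothetical, not data from the study):

```python
def fit_linear(x, y):
    """Closed-form ordinary least squares for y ≈ a*x + b."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    a = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
         / sum((xi - mx) ** 2 for xi in x))
    return a, my - a * mx

# Hypothetical monthly samples: x = EVI * Tm, y = tower-derived LUE.
x = [0.10, 0.25, 0.40, 0.55]
y = [0.012, 0.030, 0.048, 0.066]   # lies exactly on y = 0.12 * x
a, b = fit_linear(x, y)
```

In the paper's scheme, the fitted (a, b) for a new site would themselves be predicted from EVI_ave and LST_min rather than fitted to tower data.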


Copyright © 北京勤云科技发展有限公司 (Beijing Qinyun Technology Development Co., Ltd.)  京ICP备09084417号