Similar Documents
Found 20 similar documents (search time: 31 ms)
1.
This paper proposes two robust scheduling formulations for real manufacturing systems, based on the concept of a bad scenario set, to hedge against processing-time uncertainty described by discrete scenarios. The two formulations are applied to an uncertain job-shop scheduling problem with makespan as the performance criterion. A united-scenario neighbourhood (UN) structure is constructed from the bad scenario set for the scenario job-shop scheduling problem, and a tabu search (TS) algorithm using this structure is developed to solve the robust scheduling problem. Extensive computational experiments show that the first robust formulation is preferable to the second for the problem studied. The results also verify that the obtained robust solutions hedge against processing-time uncertainty by reducing both the number of bad scenarios and the degree of performance degradation on them. Moreover, the developed TS algorithm is competitive for the proposed robust scheduling formulations.
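The bad-scenario-set idea can be illustrated with a small sketch. The Python snippet below is a simplification: identical parallel machines stand in for the paper's job shop, per-scenario optima are found by brute force, and the tolerance `tol` and all instance data are invented for illustration. It flags the scenarios on which a given schedule degrades too far from that scenario's optimum:

```python
from itertools import product

def makespan(assign, times, m):
    # assign[j] = machine of job j; makespan = heaviest machine load
    loads = [0.0] * m
    for job, mach in enumerate(assign):
        loads[mach] += times[job]
    return max(loads)

def scenario_optimum(times, m):
    # brute-force optimal makespan for one processing-time scenario
    return min(makespan(a, times, m) for a in product(range(m), repeat=len(times)))

def bad_scenarios(assign, scenarios, m, tol=0.10):
    # a scenario is "bad" for a schedule if its makespan exceeds
    # that scenario's optimum by more than the tolerance
    return [i for i, t in enumerate(scenarios)
            if makespan(assign, t, m) > (1 + tol) * scenario_optimum(t, m)]

scenarios = [[3, 2, 2], [5, 2, 2]]                # two discrete scenarios
print(bad_scenarios((0, 0, 1), scenarios, 2))     # both scenarios are bad
print(bad_scenarios((0, 1, 1), scenarios, 2))     # no bad scenarios
```

A robust search such as the paper's TS would then prefer moves that shrink this bad-scenario set rather than only the nominal makespan.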

2.
Estimating the reliability of components in series and parallel systems from masked system testing data has been studied previously. In this paper we take into account a second type of uncertainty, censored lifetimes, when system components have constant failure rates. To efficiently estimate component failure rates in the presence of this combined uncertainty, we propose a useful pair of concepts for components: equivalent failure and equivalent lifetime. For a component in a system with known status and lifetime, its equivalent failure is defined as its conditional failure probability and its equivalent lifetime as its expected lifetime. For various uncertainty scenarios, we derive equivalent failures and equivalent lifetimes for individual components in both series and parallel systems. An efficient EM algorithm is formulated to estimate component failure rates. Two numerical examples illustrate the application of the algorithm.
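A minimal sketch of the equivalent-failure idea for a series system of exponential components, under an assumed candidate-set masking model (all data invented): a failure whose cause set is S contributes a conditional failure probability λ_j/Σ_{l∈S} λ_l to each candidate j, and in a series system every component accrues the full observation time.

```python
def em_masked_series(obs, k, iters=500):
    """EM for constant failure rates in a k-component series system.
    obs: (time, cause_set) pairs; cause_set holds the candidate failed
    components (empty set = right-censored observation)."""
    lam = [1.0] * k
    # series system: every component ages for the whole observation time
    total_time = sum(t for t, _ in obs)
    for _ in range(iters):
        eq_fail = [0.0] * k
        for t, S in obs:
            if S:
                z = sum(lam[j] for j in S)
                for j in S:
                    eq_fail[j] += lam[j] / z     # equivalent failure of component j
        lam = [f / total_time for f in eq_fail]  # MLE given equivalent failures
    return lam

obs = [(1.0, {0}), (1.0, {0}), (2.0, {1}),   # known causes
       (3.0, {0, 1}),                        # masked cause
       (4.0, set())]                         # censored, no failure
lam = em_masked_series(obs, k=2)
```

On this toy data the estimates converge to λ ≈ (0.242, 0.121): the masked failure is apportioned between the two components in proportion to their current rate estimates.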

3.
Parallel machine scheduling problems are commonly encountered in a wide variety of manufacturing environments and have been extensively studied. This paper addresses a makespan-minimisation scheduling problem on identical parallel machines in which the processing time of each job is uncertain and its probability distribution is unknown because of limited information. In this case, deterministic or stochastic scheduling models may be unsuitable. We propose a robust (min–max regret) scheduling model for identifying a schedule with minimal maximum deviation from the corresponding optimal schedule across all possible job-processing times (called scenarios), where the scenarios are specified as closed intervals. To solve this NP-hard robust scheduling problem, we first prove that a regret-maximising scenario for any schedule belongs to a finite set of extreme-point scenarios. We then derive two exact algorithms based on a general iterative relaxation procedure. Moreover, a good initial solution for these algorithms (the optimal schedule under the mid-point scenario) is discussed, and several heuristics are developed to solve large-scale problems. Finally, computational experiments are conducted to evaluate the performance of the proposed methods.
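A toy sketch of the min–max regret evaluation. The extreme-point property means the worst case can be found by trying every job at its lower or upper bound, which is tractable only for tiny instances; the interval data below are invented, and the schedule being evaluated comes from an LPT heuristic on the mid-point scenario, in the spirit of the mid-point initial solution mentioned in the abstract:

```python
from itertools import product

def makespan(assign, times, m):
    loads = [0.0] * m
    for j, mach in enumerate(assign):
        loads[mach] += times[j]
    return max(loads)

def opt_makespan(times, m):
    return min(makespan(a, times, m) for a in product(range(m), repeat=len(times)))

def max_regret(assign, intervals, m):
    # the regret-maximising scenario is an extreme point: every p_j at its
    # lower or upper bound, so enumerating extreme scenarios suffices
    return max(makespan(assign, s, m) - opt_makespan(s, m)
               for s in product(*intervals))

def lpt(times, m):
    # longest-processing-time list schedule
    order = sorted(range(len(times)), key=lambda j: -times[j])
    loads, assign = [0.0] * m, [0] * len(times)
    for j in order:
        mach = loads.index(min(loads))
        assign[j] = mach
        loads[mach] += times[j]
    return tuple(assign)

intervals = [(2, 4), (1, 5), (3, 3), (2, 6)]        # [p_j_min, p_j_max]
mid_sched = lpt([(a + b) / 2 for a, b in intervals], 2)
print(mid_sched, max_regret(mid_sched, intervals, 2))
```

The exact algorithms of the paper replace this brute-force enumeration with an iterative relaxation over the extreme-point scenarios.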

4.
Motivated by the challenges encountered in sawmill production planning, we study a multi-product, multi-period production planning problem with uncertainty in the quality of raw materials, and consequently in process yields, as well as uncertainty in product demands. As demand and yield have different uncertain natures, they are modelled separately and then integrated. Demand uncertainty is treated as a dynamic stochastic data process over the planning horizon and modelled as a scenario tree; each stage of the demand scenario tree corresponds to a cluster of time periods for which demand behaves in a stationary way. The uncertain yield is modelled as scenarios with stationary probability distributions over the planning horizon. The yield scenarios are then integrated into each node of the demand scenario tree, constituting a hybrid scenario tree. Based on this hybrid tree for uncertain yield and demand, a multi-stage stochastic programming (MSP) model is proposed with full recourse for demand scenarios and simple recourse for yield scenarios. We conduct a case study on a realistic-scale sawmill. Numerical results indicate that the solution of the multi-stage stochastic model is far superior to the optimal solutions of the mean-value deterministic and two-stage stochastic models.
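The hybrid-tree construction can be sketched as a small data-structure exercise. The dict-based tree layout and the toy numbers below are invented for illustration; since the yield scenarios are stationary, they are simply replicated at every demand node:

```python
def attach_yields(node, yield_scenarios):
    # replicate the stationary yield scenarios at every demand node,
    # turning the demand scenario tree into the hybrid tree
    node["yields"] = list(yield_scenarios)
    for child in node.get("children", []):
        attach_yields(child, yield_scenarios)
    return node

def leaf_path_probs(node, acc=1.0):
    # probability of each root-to-leaf demand path (should sum to 1)
    p = acc * node["prob"]
    kids = node.get("children", [])
    if not kids:
        return [p]
    return [q for c in kids for q in leaf_path_probs(c, p)]

demand_tree = {"prob": 1.0, "demand": 100, "children": [
    {"prob": 0.6, "demand": 120, "children": []},
    {"prob": 0.4, "demand": 80,  "children": []},
]}
yields = [(0.7, 0.90), (0.3, 0.75)]   # (probability, yield fraction)
hybrid = attach_yields(demand_tree, yields)
```

In the MSP model, demand branching gets full recourse (decisions differ per branch), while the replicated yield scenarios within a node get only simple recourse penalties.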

5.
Ran Cao, Wei Hou, Yanying Gao 《工程优选》 2018, 50(9): 1453–1469
This article presents a three-stage approach for solving multi-objective system reliability optimization problems under uncertainty. Each component's reliability enters the formulation as an estimate in the form of an interval value or discrete values, since component reliability may vary with the usage scenario; uncertainty is described by defining a set of usage scenarios. In the first stage, an entropy-based approach to the redundancy allocation problem is proposed to identify a deterministic reliability for each component. In the second stage, a multi-objective evolutionary algorithm (MOEA) is applied to produce a Pareto-optimal solution set. In the third stage, a hybrid algorithm based on k-means and silhouettes selects representative solutions. Finally, a numerical example illustrates the performance of the proposed approach.

6.
This paper presents a multistage stochastic programming model for strategic capacity planning at a major US semiconductor manufacturer. The main sources of uncertainty in this multi-year planning problem are the demand for different technologies and the capacity estimates for each fabrication (fab) facility. We test the model on real-world scenarios requiring capacity planning for 29 technology categories across five fab facilities. The objective is to minimize the gaps between product demands and the capacity allocated to the technology specified by each product. We consider two different scenario-analysis constructs. The first is an independent scenario structure, in which we assume no prior information and the model systematically enumerates the possible states in each period, with states in different periods independent of each other. The second is an arbitrary scenario construct, which allows the planner to sample and evaluate arbitrary multi-period scenarios that capture dependency between periods. In both cases, a scenario is a multi-period path from the root to a leaf of the scenario tree. We conduct intensive computational experiments on these models using real data supplied by the manufacturer. The purpose of the experiments is twofold: first, to examine different degrees of scenario aggregation and their effect on the independent model's ability to reach high-quality solutions; using this as a benchmark, we then compare the results of the arbitrary model and illustrate the different uses of the two constructs. We show that the independent model accommodates varying degrees of scenario aggregation without significant prior information, while the arbitrary model lets planners play out specific scenarios when prior information is available.

7.
Maximum-likelihood estimators are used in algorithms for measuring concentrations in gas analyzers under a priori uncertainty in the parameters of the distribution of the underlying random process. Optimal and quasi-optimal algorithms are proposed, the latter being simpler to implement, and the improvement in measurement performance obtained with these algorithms is estimated.

8.
Demand uncertainty is described by a set of scenarios, and the objective is to minimise the maximum network expansion cost over all scenarios. A scenario-programming model is established and a decomposition algorithm is proposed: first, the edges requiring expansion and their expansion capacities are computed for each scenario; then the union of these edge sets is taken, with each edge's capacity set to the maximum required across scenarios; finally, the minimum expansion cost is computed. Computational results show that the decomposition algorithm greatly improves solution speed.
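The merging step of the decomposition can be sketched in a few lines, assuming each scenario's required edge expansions have already been computed by the per-scenario subproblems (the requirements and unit costs below are invented):

```python
def merge_expansions(scenario_caps, unit_cost):
    """scenario_caps: per scenario, a dict edge -> extra capacity needed.
    Union the edges; give each edge its maximum required capacity across
    scenarios; price the result at the given per-unit costs."""
    need = {}
    for caps in scenario_caps:
        for e, c in caps.items():
            need[e] = max(need.get(e, 0), c)
    cost = sum(unit_cost[e] * c for e, c in need.items())
    return need, cost

s1 = {("a", "b"): 3, ("b", "c"): 1}
s2 = {("a", "b"): 2, ("a", "c"): 4}
need, cost = merge_expansions(
    [s1, s2], {("a", "b"): 1.0, ("b", "c"): 2.0, ("a", "c"): 0.5})
print(need, cost)   # cost = 3*1.0 + 1*2.0 + 4*0.5 = 7.0
```

Because each scenario is solved independently, the expensive coupled problem is replaced by small subproblems plus this cheap merge, which is where the reported speed-up comes from.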

9.
The cost of software testing can be reduced by automated test data generation, which seeks a minimal set of test data with maximum coverage. Search-based software testing (SBST) is one technique recently used for this task; it relies on the control flow graph (CFG) and meta-heuristic search algorithms. This paper focuses on test data generation for branch coverage. A major drawback of meta-heuristic techniques is that the CFG paths must be traversed from start node to end node for each automatically generated test datum. Such traversal can be improved by branch ordering together with elitism, but the population size and number of iterations must still be kept large to keep all branches alive. In this paper, we present an incremental genetic algorithm (IGA) for branch coverage testing. Initially, a classical genetic algorithm (GA) is used to construct a population containing the best parents for each branch node, and the IGA is then started with these parents as its initial population. Hence it is not necessary to maintain a huge population or a large number of iterations to cover all branches. The performance is analyzed on five benchmark programs from the literature. The experimental results indicate that the proposed IGA outperforms other meta-heuristic search techniques in terms of memory usage and scalability.
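The core ingredients named here (branch-distance fitness, elitism, a GA over test inputs) can be illustrated with a deliberately tiny sketch, not the paper's IGA: a mutation-based GA with elitism searches for inputs `(x, y)` that cover a branch guarded by `x == y`, using the standard branch distance `|x - y|` as the fitness to minimise. All parameters are invented:

```python
import random

def branch_distance(x, y):
    # branch distance for the target predicate `x == y`: zero iff covered
    return abs(x - y)

def ga_cover(pop_size=30, gens=200, lo=-1000, hi=1000, seed=3):
    """Tiny GA with elitism searching for test inputs that cover x == y."""
    rng = random.Random(seed)
    pop = [(rng.randint(lo, hi), rng.randint(lo, hi)) for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=lambda ind: branch_distance(*ind))
        if branch_distance(*pop[0]) == 0:
            return pop[0]                       # branch covered
        elite = pop[:pop_size // 2]             # elitism: best half survives
        children = [(x + rng.randint(-3, 3), y + rng.randint(-3, 3))
                    for x, y in elite]          # mutated copies of the elite
        pop = elite + children
    return min(pop, key=lambda ind: branch_distance(*ind))

x, y = ga_cover()
```

The paper's IGA differs in that it seeds the population per branch from a first GA pass, so later branches start from good parents instead of random individuals.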

10.
This paper elucidates the computation of optimal controls for steel annealing processes treated as hybrid systems comprising one or more furnaces integrated with plant-wide planning and scheduling operations. A class of hybrid systems is considered that captures the trade-off between metallurgical quality requirements and timely product delivery. Several optimization algorithms are applied to the optimal control problems for the steel annealing processes (SAP): particle swarm optimization (PSO) with time-varying inertia weight, PSO with globally and locally tuned parameters (GLBest PSO), parameter-free PSO (pf-PSO), a PSO-like algorithm via extrapolation (ePSO), a real-coded genetic algorithm (RCGA), and a two-phase hybrid real-coded genetic algorithm (HRCGA). The optimal solutions, including optimal line speed, optimal cost, job completion time, and convergence rate, obtained by these algorithms are compared with each other and with those of the existing forward algorithm (FA). Statistical analyses, including ANOVA and hypothesis t-tests, are carried out to compare the performance of each method. The comparative study indicates that the PSO-like algorithms pf-PSO and ePSO are equally good and outperform all the other optimization methods considered.
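For reference, the baseline PSO update that all the variants above modify can be sketched as follows (a generic inertia-weight PSO minimising an invented test function, not the paper's SAP cost model; all parameter values are conventional defaults, not taken from the paper):

```python
import random

def pso(f, dim, bounds, n=20, iters=200, w=0.7, c1=1.5, c2=1.5, seed=1):
    """Canonical global-best PSO with constant inertia weight w."""
    rng = random.Random(seed)
    lo, hi = bounds
    x = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n)]
    v = [[0.0] * dim for _ in range(n)]
    pbest = [xi[:] for xi in x]
    pval = [f(xi) for xi in x]
    g = min(range(n), key=lambda i: pval[i])
    gbest, gval = pbest[g][:], pval[g]
    for _ in range(iters):
        for i in range(n):
            for d in range(dim):
                # inertia + cognitive pull to pbest + social pull to gbest
                v[i][d] = (w * v[i][d]
                           + c1 * rng.random() * (pbest[i][d] - x[i][d])
                           + c2 * rng.random() * (gbest[d] - x[i][d]))
                x[i][d] = min(hi, max(lo, x[i][d] + v[i][d]))
            val = f(x[i])
            if val < pval[i]:
                pval[i], pbest[i] = val, x[i][:]
                if val < gval:
                    gval, gbest = val, x[i][:]
    return gbest, gval

best, val = pso(lambda p: sum(t * t for t in p), dim=3, bounds=(-5, 5))
```

The variants in the paper change the pieces of this update: time-varying methods schedule `w` over iterations, GLBest PSO tunes the coefficients from local and global bests, and pf-PSO/ePSO remove or extrapolate the fixed parameters.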

11.
Development of real-time in situ monitoring and control of thin film depositions using ellipsometry requires both data acquisition and processing to be rapid. Present speeds of measurement and computation of the basic parameters, Ψ and Δ, are sufficient for data acquisition that is essentially real time. However, computation of film parameters such as thickness and optical properties generally cannot keep up with the incoming data and must be performed in batch mode after the deposition.

This work describes the development of enhanced, high-speed data reduction algorithms using artificial neural networks (ANN). The networks are trained on computed data and subsequently give values of film parameters in the millisecond regime. The ANN outputs are used as initial estimates in a variably damped least-squares algorithm for accuracy improvement. The combination of these two algorithms provides very accurate solutions in 75 ms per point on a DEC VAX 8800 multiprocessor system running at a combined 12 Mips, a speed suitable for real-time film monitoring and control at growth rates up to 10 nm per second. Results for fixed-angle-of-incidence, single-wavelength, in situ data for Ni deposited on BK7 substrates are presented.


12.
The National Physical Laboratory, India, has established an ultrasonic interferometer manometer (UIM) as a primary standard in the barometric pressure region. This paper reports the measurement uncertainty of the UIM evaluated under experimental conditions at each pressure point, through software developed and integrated with the existing operating program. The operating program makes an initial estimate of the column length from the time of flight and four predefined ultrasonic frequencies using an exact-fractions algorithm, but the final pressure is calculated from measurements made at multiple frequencies. At a given pressure point, 44 measurements are made at different frequencies, covering two full circles (fringes), and their mean is taken as the measured pressure to minimize the uncertainty contribution of systematic phase error. In the present evaluation, the standard deviation of this mean is taken into account in estimating the measurement uncertainty under real experimental conditions. The uncertainty thus estimated has been compared with the uncertainty evaluated theoretically per the ISO Guide to the Expression of Uncertainty in Measurement. This experimental determination of measurement uncertainty has helped us make further improvements in the measurement accuracy of the UIM, especially for low-pressure measurements.
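The repeated-readings part of the evaluation (the standard deviation of the mean of the 44 measurements) follows directly from the GUM's Type A definitions and can be sketched in a few lines; the sample readings below are invented, not UIM data:

```python
from math import sqrt

def mean_and_u(samples):
    # Type A evaluation: mean and experimental standard deviation
    # of the mean, s / sqrt(n)
    n = len(samples)
    m = sum(samples) / n
    s2 = sum((x - m) ** 2 for x in samples) / (n - 1)   # sample variance
    return m, sqrt(s2 / n)

# e.g. four of the repeated pressure readings (invented values, kPa)
m, u = mean_and_u([100.02, 99.98, 100.01, 99.99])
```

In the paper's procedure this Type A component from the 44 readings is then combined with the theoretically evaluated components to give the total measurement uncertainty.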

13.
We have adopted the state-vector fusion technique for fusing track data from multiple sensors to provide complete and precise trajectory information about a flight vehicle under test, for the purposes of flight safety monitoring and decision-making at the test range. The present paper reports the performance of the algorithm for different process and measurement noise levels, using simulated as well as real track data.

14.
This paper presents a methodology for evaluating the position availability of automotive-grade global positioning system (GPS) receivers intended for telematics applications, using a multichannel GPS satellite signal simulator in a controlled laboratory environment. Initially, field testing of two distinct GPS receivers was conducted in an urban canyon environment and a foliage environment to assess each receiver's position availability performance. Test scenarios were then developed on a multichannel GPS satellite signal simulator to create controlled and repeatable stimuli for the receivers. The scenarios take into account the actual satellite constellations at the same day, time, and locations as the field data collections. Furthermore, the number of visible satellites and the power levels were adjusted to exercise the hardware tracking sensitivity, hardware acquisition sensitivity, dynamic range, and navigation filter design, all of which affect position availability. Quantitative results demonstrated good correlation between the results obtained with the developed test scenarios and those from the field testing. The proposed methodology reduces validation cost and time to market for automotive telematics products.

15.
A scenario in a risk analysis can be defined as the propagation of a specific initiating event toward a wide range of undesirable consequences. Taking various scenarios into consideration makes the risk analysis more complex than it would be without them. Many risk analyses have been performed to estimate a risk profile under both uncertain future states of hazard sources and undesirable scenarios. Unfortunately, for specific systems such as a radioactive waste disposal facility, the behaviour of future scenarios can hardly be predicted without a special reasoning process, so their risk cannot be estimated with a traditional risk analysis methodology alone. Moreover, we believe that the sources of uncertainty about future states can be pertinently reduced by setting up dependency relationships that interrelate the geological, hydrological, and ecological aspects of the site with all the scenarios; the current methodology for uncertainty analysis of waste disposal facilities should be revisited under this belief.

To account for the effects predicted from the evolution of the environmental conditions of waste disposal facilities, this paper proposes a quantitative assessment framework integrating the inference process of a Bayesian network into traditional probabilistic risk analysis. We developed and verified an approximate probabilistic inference program for the specific Bayesian network using a bounded-variance likelihood weighting algorithm. Ultimately, specific models, including a model for the uncertainty propagation of relevant parameters, were developed, with a comparison of variable-specific effects due to the occurrence of diverse altered evolution scenarios (AESs). After providing supporting information to obtain a variety of quantitative expectations about the dependency relationships between domain variables and AESs, we could connect the results of probabilistic inference from the Bayesian network with the consequence evaluation model addressed. We obtained a number of practical results for improving the current knowledge base for the prioritization of future risk-dominant variables at an actual site.
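The likelihood-weighting idea behind the inference program can be illustrated on a deliberately tiny two-node network (the structure, variable names, and probabilities below are all invented; the real model has many site variables): non-evidence nodes are sampled from their priors, and each sample is weighted by the likelihood of the observed evidence.

```python
import random

P_S = 0.3                              # prior: altered evolution scenario occurs
P_C_GIVEN_S = {True: 0.8, False: 0.1}  # P(contamination observed | S)

def likelihood_weighting(evidence_c, n=100_000, seed=0):
    # sample S from its prior; weight each sample by the likelihood
    # of the evidence node C under the sampled value of S
    rng = random.Random(seed)
    w = {True: 0.0, False: 0.0}
    for _ in range(n):
        s = rng.random() < P_S
        w[s] += P_C_GIVEN_S[s] if evidence_c else 1.0 - P_C_GIVEN_S[s]
    return w[True] / (w[True] + w[False])

post = likelihood_weighting(evidence_c=True)
# exact posterior: 0.3*0.8 / (0.3*0.8 + 0.7*0.1) = 24/31 ≈ 0.774
```

The bounded-variance variant of the paper additionally controls the spread of these weights so the estimator's error can be bounded; the plain version above makes no such guarantee.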

16.
In this article, an efficient and novel approach for video data association is developed. The method is formulated as a search across the hypothesis space defined by the possible associations between tracks and detections, carried out for each frame of a video sequence. The full data association problem in visual tracking is formulated as a combinatorial hypothesis search with a heuristic evaluation function that takes into account structural and specific information such as distance, shape, and color. To guarantee real-time performance, a time limit is set for the search process to explore alternative solutions; this limit defines an upper bound on the number of evaluations, depending on search algorithm efficiency. Estimation of distribution algorithms are proposed as an efficient evolutionary computation technique for searching this hypothesis space. Finally, an exhaustive comparison of the performance of alternative algorithms is carried out on complex, representative situations in real video sets. © 2009 Wiley Periodicals, Inc. Int J Imaging Syst Technol, 19, 208–220, 2009
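As background for the search technique named above, a minimal estimation of distribution algorithm (UMDA on a bitstring OneMax objective, standing in for the paper's association-hypothesis encoding; all parameters invented) works by re-estimating a per-bit probability model from the selected best individuals and resampling from it:

```python
import random

def umda_onemax(n_bits=20, pop=60, sel=20, gens=40, seed=0):
    """Univariate marginal distribution algorithm (a simple EDA):
    estimate per-bit probabilities from the selected best individuals,
    then resample the next population from that distribution."""
    rng = random.Random(seed)
    p = [0.5] * n_bits
    best = None
    for _ in range(gens):
        population = [[1 if rng.random() < p[i] else 0 for i in range(n_bits)]
                      for _ in range(pop)]
        population.sort(key=sum, reverse=True)     # fitness = number of ones
        if best is None or sum(population[0]) > sum(best):
            best = population[0]
        chosen = population[:sel]
        # re-estimate marginals, clamped away from 0/1 to keep diversity
        p = [min(0.95, max(0.05, sum(ind[i] for ind in chosen) / sel))
             for i in range(n_bits)]
    return best

best = umda_onemax()
```

In the article's setting, the probability model would be estimated over association hypotheses rather than bits, and the per-frame time limit caps how many such sampling generations run.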

17.
A wavelet-based multisensor data fusion algorithm
This paper presents a wavelet transform-based data fusion algorithm for multisensor systems. With this algorithm, the optimum estimate of a measurand can be obtained in the minimum mean square error (MMSE) sense. The variance of the optimum estimate is smaller not only than that of each observation sequence but also than that of the arithmetic-average estimate. To implement the algorithm, the variance of each observation sequence is estimated using the wavelet transform, and the optimum weighting factor for each observation is obtained accordingly. Since the variance of each observation sequence is estimated only from its most recent data of a predetermined length, the algorithm is self-adaptive. It is applicable to both static and dynamic systems, including time-invariant and time-varying processes. Its effectiveness is demonstrated using a piecewise-smooth signal and an actual time-varying flow signal.
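The MMSE weighting itself is the standard inverse-variance rule, sketched below; the sensor readings are invented, and the paper's wavelet-based estimation of each sequence's variance is assumed to have been done elsewhere:

```python
def mmse_fuse(values, variances):
    # weight each observation by the inverse of its (estimated) variance;
    # the fused variance 1/sum(1/var) is never larger than the smallest
    # input variance, matching the property stated in the abstract
    inv = [1.0 / v for v in variances]
    s = sum(inv)
    est = sum(w * x for w, x in zip(inv, values)) / s
    return est, 1.0 / s

est, var = mmse_fuse([10.0, 12.0], [1.0, 3.0])
print(est, var)   # ≈ 10.5, 0.75
```

With equal variances this reduces to the arithmetic average; with unequal variances it beats the average, which is the comparison the paper draws.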

18.
Most shops currently maintain a single process plan for each part type they manufacture, even when multiple process plans are feasible for producing the part. Using a static process plan regardless of product mix and volume robs the shop of production flexibility and efficiency. In this paper, given that multiple process plans exist for a part, we address a method for determining the best process plan to implement for each part type in a production scenario defined by a known product mix and volume. The method selects a set of process plans that minimizes total material handling and machining time for the given mix and volume. The problem is modelled mathematically and solved with a heuristic algorithm. Experimental results describing the performance of the algorithm are presented for different production scenarios, problem sizes, and solution strategies.
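A deliberately simplified sketch of the selection step on invented data: each part independently gets its cheapest plan by per-unit handling-plus-machining time. The paper's heuristic additionally accounts for interactions between parts through shared material handling, which this independent greedy choice ignores:

```python
def select_plans(plans, volume):
    """plans: part -> list of per-unit handling+machining times, one entry
    per feasible process plan; volume: part -> units to produce."""
    choice = {p: min(range(len(ts)), key=lambda i: ts[i])
              for p, ts in plans.items()}
    total = sum(plans[p][choice[p]] * volume[p] for p in plans)
    return choice, total

plans = {"A": [2.0, 3.0], "B": [5.0, 4.0]}   # two candidate plans per part
volume = {"A": 10, "B": 5}
choice, total = select_plans(plans, volume)
print(choice, total)   # plan 0 for A, plan 1 for B; total time 40.0
```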

19.
We focus on an inverse problem: identifying physical parameters such as Young's modulus and the air and structural damping coefficients in a mathematical model of cantilevered beams subject to random disturbance, using noisy dynamic vibration data measured nondestructively. First, we describe the mathematical model of the cantilevered beam by an Euler–Bernoulli-type partial differential equation containing the parameters to be identified, together with the measurement equation for the vibration data, which includes observation noise. Second, the identification problem is divided into an estimation problem, which obtains the (modal) state estimate, and a least-squares problem, which determines the unknown parameters; the unknown parameters are then determined recursively by applying the two algorithms alternately. Finally, simulation studies and experiments are presented to verify the efficacy of the proposed identification algorithm.

20.
The performance of genome-wide gene regulatory network inference algorithms depends on the sample size; it is generally assumed that the larger the sample size, the better the inference performance. Nevertheless, there is little information on determining the sample size needed for optimal performance. In this study, the author systematically demonstrates the effect of sample size on information-theory-based gene network inference algorithms with an ensemble approach. The empirical results show that the inference performance of the considered algorithms tends to converge beyond a particular sample-size region. As a specific example, a sample size of around 64 is sufficient to obtain most of the inference performance, with respect to precision, using the representative algorithm C3NET on synthetic steady-state data sets of Escherichia coli and a time-series data set of Homo sapiens subnetworks. The author verifies the convergence result on a large, real E. coli data set as well. These results give biologists evidence for better designing experiments to infer gene networks. The effect of the cutoff on inference performance over various sample sizes is also considered. [Includes supplementary material.]


Copyright © 北京勤云科技发展有限公司 (Beijing Qinyun Technology Development Co., Ltd.)  京ICP备09084417号