Found 20 similar articles; search time: 31 ms
1.
Dean C. Chatfield, International Journal of Production Research, 2013, 51(4): 935–950
Lumpy demand is a phenomenon encountered in manufacturing or retailing when items are slow-moving or very expensive, for example fighter-plane engines. So far, the seminal procedure of Croston (1972), with or without modifications, has been the preferred approach for forecasting lumpy demand. Nevertheless, Croston (1974) and others, such as Venkitachalam et al. (2002), have suggested the use of zero forecasts when the demand contains many zeros. In this paper, we put this idea to the test with a full factorial study comparing five forecasting methods, including all-zero, under several levels of demand lumpiness, demand variation, and ordering, holding and shortage costs. We evaluate the forecasting methods by three measures of forecast error and two measures of inventory cost. We find that all-zero forecasts yield the lowest cost when lumpiness is high; they are also best for mid-lumpiness if the shortage cost is much higher than the holding cost. We also find that the lowest forecasting error does not necessarily lead to the lowest system cost. And contrary to the assertions in Chen et al. (2000b) and Dejonckheere et al. (2003, 2004), our factorial experiment reinforces the intuition that simple exponential smoothing is superior to an equivalent moving average.
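The two forecasting rules contrasted in this abstract, Croston's (1972) procedure and simple exponential smoothing, can each be sketched in a few lines. The snippet below is a minimal illustration of both (the smoothing constant alpha = 0.2 and the demand series used in the note are arbitrary choices for the example, not values from the study):

```python
def ses(series, alpha=0.2):
    """Simple exponential smoothing; returns the one-step-ahead forecast."""
    f = series[0]
    for x in series[1:]:
        f = alpha * x + (1 - alpha) * f
    return f


def croston(series, alpha=0.2):
    """Croston's (1972) method for intermittent demand.

    Smooths the non-zero demand sizes and the inter-demand intervals
    separately; the per-period forecast is size / interval.
    """
    z = None  # smoothed non-zero demand size
    p = None  # smoothed inter-demand interval
    q = 1     # periods since the last non-zero demand
    for x in series:
        if x > 0:
            z = x if z is None else alpha * x + (1 - alpha) * z
            p = q if p is None else alpha * q + (1 - alpha) * p
            q = 1
        else:
            q += 1
    if z is None:
        return 0.0  # an all-zero history gives an all-zero forecast
    return z / p
```

For the lumpy series [0, 0, 4, 0, 0, 0, 6, 0], Croston's method forecasts about 1.375 units per period, whereas SES decays toward zero between demands; this difference is exactly what makes the comparison in the study interesting.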
2.
K. N. F. Leung, Engineering Optimization, 2013, 45(5): 621–625
An error appearing in equation (3) of Y.L. Zhang (J. Appl. Prob., 1994, 31, 1123–1127) was pointed out by S.H. Sheu (Eur. J. Oper. Res., 1999, 112, 503–516), and the correct expressions (25)–(27) were given accordingly on pp. 510–511. However, the derivation of the key expression (27), the long-run expected loss rate, was not presented. The purpose of this note is threefold. First, since a monotone process (e.g. an arithmetic, geometric, or arithmetic–geometric process) approach, as discussed by K.N.F. Leung (Eng. Optimiz., 2001, 33, 473–484), is considered relevant, realistic, and appropriate for modelling a deteriorating-system maintenance problem, it is worth developing this expression explicitly, to the benefit of subsequent studies. Secondly, equation (3) in Zhang (1994) is shown to be fundamentally correct, so it can be viewed as an alternative method of formulating similar bivariate cases. Thirdly, although equations (4) and (5) in Zhang (1994) were logically and correctly derived, both can be readily reduced to their simplest forms, which are derived here.
3.
Kit-Nam Francis Leung, International Journal of Production Research, 2013, 51(1): 66–71
The main purpose of this corrigendum is to indicate and rectify the same mistakes made by Schrady (1967), Nahmias and Rivera (1979), and Teunter (2004) in the course of solving their respective models, so that subsequent researchers do not repeat them. To this end, we derive the corresponding correct global-optimal formulae for the substitution-policy model (1, n), with infinite or finite recovery (also called repair) rate, using differential calculus, and provide a closed-form expression to identify the optimal positive integral value of n recovery set-ups. In addition, we rectify the formulae and solution procedure for numerically solving the constrained non-linear programme.
4.
P. R. McMullen, International Journal of Production Research, 2013, 51(12): 2465–2478
A strategy is presented to obtain production sequences resulting in minimal tooling replacements. An objective function is employed to distribute the tool wear as evenly as possible throughout the sequence. This objective function is an extension of Miltenburg's earlier work (1989) concerned with obtaining production sequences while evenly distributing the satisfaction of demand. Smaller problems are solved to optimality, while larger problems are solved as close as possible to optimality. The production sequences are simulated to estimate required tooling replacements. The methodology presented here consistently results in fewer tooling replacements when compared with earlier published work (McMullen et al. 2002, McMullen 2003).
5.
The effects of changing the unit time length of a planning horizon from a month to a week on the optimum planning horizon were examined by calculating the optimum planning horizon through the methods proposed by Nagasawa, Nishiyama, and Hitomi (1982). It was found that the optimum planning horizon decreased by 20–30% in calendar time when the unit time length was changed from a month (monthly scheduling) to a week (weekly scheduling). However, this decrease was much smaller than the 65% shown by Bernardo (1978), and it follows that the optimum planning horizon, measured in number of periods, increased substantially with this change of unit time length. It was also clarified that the large decrease shown by Bernardo stemmed from an erroneous analysis of the relation between cost coefficients and the unit time length. Consequently, weekly scheduling is not always preferred to monthly scheduling.
6.
In this paper, the problem of minimising maximum completion time on a single batch processing machine is studied. Batch processing is performed on a machine that can simultaneously process several jobs as a batch; the processing time of a batch is determined by the longest processing time among the jobs in the batch. The batch processing machine problem is encountered in many manufacturing systems, such as burn-in operations in the semiconductor industry and heat treatment operations in the metalworking industries. Heuristics are developed by iterative decomposition of a mixed integer programming model, modified from the successive knapsack problem of Ghazvini and Dupont (1998, Minimising mean flow times criteria on a single batch processing machine with non-identical jobs sizes. International Journal of Production Economics 55: 273–280) and the waste of batch clustering algorithm of Chen, Du, and Huang (2011, Scheduling a batch processing machine with non-identical job sizes: a clustering perspective. International Journal of Production Research 49 (19): 5755–5778). Experimental results show that the suggested heuristics produce high-quality solutions comparable to those of previous heuristics in a reasonable computation time.
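The batching rule stated in the abstract (a batch's processing time equals the longest processing time among its jobs) can be illustrated with a simple first-fit-decreasing heuristic. This is a generic sketch, not the decomposition heuristic developed in the paper, and the job data and machine capacity in the note are made up for the example:

```python
def batch_makespan(jobs, capacity):
    """First-fit-decreasing batching for a single batch machine.

    jobs: list of (processing_time, size) tuples. Each batch's total
    size must not exceed `capacity`; a batch's processing time equals
    the longest processing time among its jobs, as in the abstract.
    Returns (batches, makespan).
    """
    batches = []  # each entry: [used_size, list_of_jobs]
    for pt, sz in sorted(jobs, key=lambda j: -j[0]):
        for b in batches:                 # first batch with room wins
            if b[0] + sz <= capacity:
                b[0] += sz
                b[1].append((pt, sz))
                break
        else:                             # no batch fits: open a new one
            batches.append([sz, [(pt, sz)]])
    makespan = sum(max(pt for pt, _ in b[1]) for b in batches)
    return [b[1] for b in batches], makespan
```

For jobs [(5, 3), (4, 2), (3, 4), (2, 1)] and capacity 5, the heuristic forms two batches with makespan 5 + 3 = 8; the paper's heuristics refine this kind of grouping via iterative MIP decomposition.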
7.
Zeqiang Zhang, International Journal of Production Research, 2013, 51(15): 4220–4223
This paper presents a corrected formulation to the mixed integer programming model of the double-row layout problem (DRLP), first proposed by Chung and Tanchoco (2010, The double row layout problem. International Journal of Production Research, 48 (3), 709–727). In the DRLP, machines are placed along two rows of a corridor, where the objective is to minimise the total cost of material handling for products that move between these machines. We highlight the errors in the original formulation, propose corrections to the formulation, and provide an analytical validation of the corrections.
8.
Ashraf W. Labib, International Journal of Production Research, 2013, 51(21): 6287–6299
This paper compares two decision-support tools for selecting an appropriate supplier. Suppliers are crucial to both the efficiency and the effectiveness of a company's performance, and selecting the appropriate supplier is a critical success factor. A methodology is proposed to optimise the evaluation process based on different criteria. The proposed approach extends that of Ordoobadi (2009, Development of a supplier selection model using fuzzy logic. Supply Chain Management: An International Journal, 14 (4), 314–327), who proposed the application of fuzzy logic (FL); we use the same example case study in order to compare the analytic hierarchy process (AHP) with FL. In this paper we demonstrate how the same objective of expressing human assessments in the form of linguistic expressions can be achieved using AHP. Moreover, we demonstrate the capability to run a sensitivity analysis, which helps to understand the causal relationships among the different factors, and we show how this capability can help to explain and predict the relationships among criteria and alternatives. We also provide a measure that captures the consistency of the decision maker's preferences. Our approach provides a single unit of scale that not only ranks suppliers but also conveys the difference in scale between suppliers, which can then help to allocate resources accordingly. These facilities are not offered by Ordoobadi (2009). The proposed approach can help companies to identify the best supplier in changing environments. The paper describes a decision model that incorporates a decision maker's subjective assessments and applies a multiple criteria decision making technique to manipulate and quantify these assessments.
Unlike many similar studies, the two techniques are applied to the same case study in order to improve our understanding of the differences between them.
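The consistency measure mentioned in the abstract is, in standard AHP, Saaty's consistency ratio CR = CI/RI with CI = (lambda_max - n)/(n - 1). A minimal sketch follows, assuming the geometric-mean approximation for the priority vector; the abstract does not say which prioritization scheme the paper uses:

```python
import math


def ahp_priorities(M):
    """Priority vector and consistency ratio for an AHP pairwise
    comparison matrix M (n x n, reciprocal, positive).

    Weights come from the geometric-mean (logarithmic least squares)
    approximation; CR = CI / RI with Saaty's random indices.
    """
    n = len(M)
    gm = [math.prod(row) ** (1.0 / n) for row in M]
    total = sum(gm)
    w = [g / total for g in gm]
    # lambda_max estimated as the average of (M w)_i / w_i
    lam = sum(
        sum(M[i][j] * w[j] for j in range(n)) / w[i] for i in range(n)
    ) / n
    ci = (lam - n) / (n - 1) if n > 1 else 0.0
    ri = {1: 0.0, 2: 0.0, 3: 0.58, 4: 0.90, 5: 1.12, 6: 1.24}[n]
    cr = ci / ri if ri else 0.0
    return w, cr
```

For a perfectly consistent 3x3 matrix such as [[1, 2, 4], [1/2, 1, 2], [1/4, 1/2, 1]], the weights come out as (4/7, 2/7, 1/7) and CR is zero; Saaty's usual acceptance threshold is CR below 0.1.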
9.
As Pfeffer (1993) states, until agreement is reached on a subject, progress may be slow. This paper consolidates the discussion of social capital in the operations management literature by way of a systematic literature review of 3- and 4-star journals. Human resource management, voluntary work and entrepreneurship were identified as minor themes within the review, and thus potentially underexplored areas. Quality management, project management and new product development show significant use of social capital, particularly the role of social capital in the intra-firm environment. Finally, supply chain management shows the most significant use of social capital, particularly in explaining the characteristics of buyer–supplier relationships and how these affect inter-firm performance. Areas of future research are presented that draw on all forms of social capital to explore how they may be affected by contextual factors. The paper concludes by proposing a conceptual model of social capital for use within operations management.
10.
Fatigue-induced damage is often progressive and gradual in nature. Fatigue is often exacerbated by corrosion in ageing structures, creating maintenance problems and even causing catastrophic failure. This has spurred the development of structural health monitoring (SHM) and nondestructive evaluation (NDE) systems. The recent advent of smart materials applicable to SHM alleviates the shortcomings of conventional techniques: autonomous, real-time, remote monitoring becomes possible with smart piezoelectric transducers. For instance, the electro-mechanical impedance (EMI) technique, employing piezoelectric transducers as collocated actuators and sensors, is known for its ability in damage detection and characterization. This article presents a series of lab-scale experimental tests and analyses to investigate the feasibility of fatigue crack detection and characterization employing the EMI technique. This study extends the work of Lim and Soh [1] to incorporate the phases involving crack initiation and critical crack. The results suggest that the EMI technique is effective in characterizing fatigue-induced cracking, even at its incipient stage. Micro-cracks invisible to the naked eye can be detected by the technique, especially when employing the higher frequency range of 100–200 kHz. A quick, qualitative critical-crack identification method based on visual inspection of the admittance frequency spectrum is also suggested.
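The abstract does not say which damage metric is computed from the admittance signatures; a metric commonly used in the EMI literature is the root-mean-square deviation (RMSD) between a healthy baseline and a current conductance spectrum, sketched below purely as an illustration of that standard statistic:

```python
import math


def rmsd_damage_index(baseline, current):
    """Root-mean-square deviation (%) between a baseline conductance
    signature and a current measurement over the same frequency points.
    Larger values indicate greater change (potential damage) near the
    transducer; this is a common EMI metric, not necessarily the one
    used in the paper.
    """
    num = sum((c - b) ** 2 for b, c in zip(baseline, current))
    den = sum(b ** 2 for b in baseline)
    return 100.0 * math.sqrt(num / den)
```

An unchanged signature returns 0%; in practice a threshold on the index, calibrated against the pristine state, flags crack initiation.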
11.
László Daróczy, Engineering Optimization, 2013, 45(5): 689–705
A new algorithm is proposed for topology optimization based on a fluid dynamics analogy. It possesses characteristics similar to the most well-known methods, such as the Evolutionary Structural Optimization (ESO)/Bidirectional Evolutionary Structural Optimization (BESO) method of Xie and Steven (1993, "A Simple Evolutionary Procedure for Structural Optimisation." Computers and Structures 49 (5): 885–896), which works with discrete values, and the Solid Isotropic Material with Penalization (SIMP) method of Bendsøe (1989, "Optimal Shape Design as a Material Distribution Problem." Structural Optimization 1 (4): 193–202) and Zhou and Rozvany (1991, "The COC Algorithm, Part II: Topological, Geometrical and Generalized Shape Optimization." Computer Methods in Applied Mechanics and Engineering 89 (1–3): 309–336), using the Optimality Criterion (OC) or the Method of Moving Asymptotes (MMA), which works with intermediate values. The new method is able to work with both discrete and intermediate densities, but always yields a solution with discrete densities. It can be proven mathematically that the new method is a generalization of the BESO method, and with appropriate parameters it operates exactly as the BESO method. The new method is less sensitive to rounding errors of the matrix solver than the BESO method and is able to give alternative topologies to well-known problems. The article presents the basic idea and the optimization algorithm, and compares the results of three cantilever optimizations to the results of the SIMP and BESO methods.
12.
In this paper, a simulated annealing approach is developed for the parallel mixed-model assembly line balancing and model sequencing (PMMAL/BS) problem which is an extension of the parallel assembly line balancing (PALB) problem introduced by Gökçen et al. (2006). In PALB, the aim is to balance more than one assembly line together. Balancing of the lines simultaneously with a common resource is very important in terms of resource minimisation. The proposed approach maximises the line efficiency and distributes the workloads smoothly across stations. The proposed approach is illustrated with two numerical examples and its performance is tested on a set of test problems. The computational results show that the proposed approach is very effective for PMMAL/BS.
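Simulated-annealing approaches like this one share a standard skeleton: accept improving moves always, accept worsening moves with probability exp(-delta/T), and cool the temperature geometrically. The sketch below is that generic loop only; the move operator and cost function for PMMAL/BS are not given in the abstract, so both are left to the caller, and the default parameters are arbitrary:

```python
import math
import random


def simulated_annealing(cost, x0, neighbor, t0=10.0, cooling=0.95,
                        iters=500, rng=None):
    """Generic simulated-annealing loop.

    cost: objective to minimise; x0: starting solution;
    neighbor(x, rng): returns a random neighbouring solution.
    Worsening moves are accepted with probability exp(-delta / T),
    and T cools geometrically each iteration.
    """
    rng = rng or random.Random(0)
    x, c = x0, cost(x0)
    best, best_c = x, c
    t = t0
    for _ in range(iters):
        y = neighbor(x, rng)
        cy = cost(y)
        if cy <= c or rng.random() < math.exp(-(cy - c) / t):
            x, c = y, cy
            if c < best_c:
                best, best_c = x, c
        t *= cooling
    return best, best_c
```

On a toy problem (minimising x squared over the integers with plus/minus-1 moves) the loop settles near the optimum; a PMMAL/BS implementation would replace the move with, for example, a task reassignment between stations or a model-sequence swap.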
13.
This paper considers a two-stage assembly flow shop problem where m parallel machines are in the first stage and an assembly machine is in the second stage. The objective is to minimise a weighted sum of makespan and mean completion time for n available jobs. As this problem is proven to be NP-hard, we employ an imperialist competitive algorithm (ICA) as the solution approach. In the literature, Torabzadeh and Zandieh (2010) showed that a cloud theory-based simulated annealing algorithm (CSA) is an appropriate meta-heuristic for solving the problem. Thus, to assess the capability of ICA, we compare our proposed ICA with the reported CSA. A new parameter-tuning tool for ICA, based on a neural network, is also introduced. The computational results show that ICA outperforms CSA in solution quality.
14.
Process yield is an important criterion used in the manufacturing industry for measuring process performance. Methods for measuring yield for processes with a single characteristic have been investigated extensively. However, methods for measuring yield for processes with multiple characteristics have been comparatively neglected. In this paper, we develop a generalized yield index, called S^T_pk,PC, based on the index S_pk introduced by Boyles (Journal of Quality Technology, 23, 17–26, 1991), using the principal component analysis (PCA) technique. We obtain a lower confidence bound (LCB) for the true process yield. The proposed method can be used to determine whether a process meets the preset yield requirement and to make reliable decisions. Examples are provided to demonstrate the proposed methodology.
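Boyles' (1991) single-characteristic index, on which the generalized index is based, is S_pk = (1/3) * Phi^-1( Phi((USL - mu)/sigma)/2 + Phi((mu - LSL)/sigma)/2 ), with implied yield 2*Phi(3*S_pk) - 1. A minimal sketch of the single-characteristic case follows; the paper's PCA-based multivariate generalization is not reproduced here:

```python
from statistics import NormalDist


def spk(mu, sigma, lsl, usl):
    """Boyles' (1991) process-yield index S_pk for one characteristic.

    S_pk = (1/3) * Phi^-1( Phi((USL - mu)/sigma)/2
                         + Phi((mu - LSL)/sigma)/2 )
    where Phi is the standard normal CDF.
    """
    nd = NormalDist()
    p = 0.5 * nd.cdf((usl - mu) / sigma) + 0.5 * nd.cdf((mu - lsl) / sigma)
    return nd.inv_cdf(p) / 3.0


def yield_from_spk(s):
    """Process yield implied by S_pk: 2*Phi(3*S_pk) - 1."""
    return 2.0 * NormalDist().cdf(3.0 * s) - 1.0
```

A process centred midway between the limits with three sigma on each side gives S_pk = 1 and a yield of about 99.73%, the familiar benchmark.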
15.
A new metaheuristic optimization algorithm, called cuckoo search (CS), was recently developed by Yang and Deb (2009, 2010). This article uses CS with Lévy flights to solve the reliability redundancy allocation problem, which involves setting reliability objectives for components or subsystems in order to meet resource consumption constraints, e.g. the total cost. The main difficulty of the problem is maintaining feasibility with respect to three nonlinear constraints, namely cost, weight and volume-related constraints. Redundancy allocation problems have been studied in the literature for decades, usually using mathematical programming or metaheuristic optimization algorithms. The performance of the algorithm is tested on five well-known reliability redundancy allocation problems and is compared with several well-known methods. Simulation results demonstrate that the optimal solutions obtained by CS are better than the best solutions obtained by other methods.
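The Lévy flights used by cuckoo search are commonly generated with Mantegna's algorithm. The sketch below shows one step; beta = 1.5 is the value usually quoted by Yang and Deb, but the abstract does not specify the parameters used in this article:

```python
import math
import random


def mantegna_sigma(beta):
    """Scale parameter sigma_u of Mantegna's algorithm for
    approximately Levy-stable steps with exponent beta."""
    num = math.gamma(1 + beta) * math.sin(math.pi * beta / 2)
    den = math.gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2)
    return (num / den) ** (1 / beta)


def levy_step(beta, rng):
    """One Levy-flight step: u / |v|^(1/beta), where
    u ~ N(0, sigma_u^2) and v ~ N(0, 1)."""
    u = rng.gauss(0.0, mantegna_sigma(beta))
    v = rng.gauss(0.0, 1.0)
    return u / abs(v) ** (1 / beta)
```

In cuckoo search, each candidate (nest) is perturbed by such a step scaled by a step size; the heavy tail produces occasional long jumps that help escape local optima, which is the property the redundancy-allocation search exploits.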
16.
Rajagopalan and Irani (Some comments on Malakooti et al. 'Integrated group technology, cell formation, process planning, and production planning with application to the emergency room'. Int. J. Prod. Res., 2006, 44, 2265–2276) provide a critique of the Malakooti et al. (Integrated group technology, cell formation, process and production planning with application to the emergency room. Int. J. Prod. Res., 2004, 42, 1769–1786) integrated cell/process/capacity formation (ICPCF) approach and suggest an improved method for solving the ICPCF problem. Rajagopalan and Irani (2006) attempt to solve the emergency room layout problem presented in Malakooti et al. (2004) and claim to have obtained an improved solution from their approach (hybrid flowshop layout). Although there are certain advantages to Rajagopalan and Irani's (2006) approach, we believe that their approach for solving ICPCF problems has significant shortcomings.
17.
Mohammad Arabi, Mohammad Mehdi Faezipour, Mohammad Layeghi, Majid Khanali, Hamid Zareahosseinabadi, Particulate Science and Technology, 2017, 35(6): 723–730
In this study, fluidized bed drying experiments were conducted for poplar wood particles (Populus deltoides) at temperatures ranging from 90°C to 120°C and air velocities ranging from 2.8 m s⁻¹ to 3.3 m s⁻¹. The initial moisture content (MC) and the bed height of the poplar wood particles were 150% (on an oven-dry basis) and 2 cm, respectively. The results showed that the drying rate increased with increasing drying temperature and air velocity. The constant drying rate period was observed only in the early stages of the drying process, and most of the drying occurred in the falling rate period. The experimental drying data were fitted to eleven models. Among these, the models of Midilli, Kucuk, and Yapar (2002) and of Henderson and Pabis (1961) were found to describe the drying characteristics of poplar wood particles satisfactorily. The effective moisture diffusivity of the wood particles increased from 7.00×10⁻⁶ to 8.46×10⁻⁶ m² s⁻¹ and from 7.65×10⁻⁶ to 1.44×10⁻⁵ m² s⁻¹ as the drying air temperature increased from 90°C to 120°C for air velocities of 2.8 m s⁻¹ and 3.3 m s⁻¹, respectively. The activation energies of diffusion were 34.08 kJ mol⁻¹ and 64.70 kJ mol⁻¹ for air velocities of 2.8 m s⁻¹ and 3.3 m s⁻¹, respectively.
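The Henderson and Pabis (1961) model named above is MR(t) = a·exp(-k·t), and the activation energy comes from an Arrhenius fit of the effective diffusivity, D = D0·exp(-Ea/(R·T)). The sketch below evaluates the model and backs out Ea from two diffusivity/temperature pairs; this two-point illustration will not reproduce the paper's reported 34.08 kJ mol⁻¹, which is fitted over the full temperature range:

```python
import math

R = 8.314  # universal gas constant, J mol^-1 K^-1


def henderson_pabis(t, a, k):
    """Henderson and Pabis (1961) thin-layer drying model:
    moisture ratio MR(t) = a * exp(-k * t)."""
    return a * math.exp(-k * t)


def activation_energy(d1, temp1_c, d2, temp2_c):
    """Arrhenius activation energy (J/mol) from two effective
    diffusivities d1, d2 (m^2/s) at Celsius temperatures temp1_c,
    temp2_c:  D = D0*exp(-Ea/(R*T))
           => Ea = R * ln(d2/d1) / (1/T1 - 1/T2)."""
    t1, t2 = temp1_c + 273.15, temp2_c + 273.15
    return R * math.log(d2 / d1) / (1.0 / t1 - 1.0 / t2)
```

With synthetic diffusivities generated from a known Ea the two-point formula recovers that Ea exactly, which is a quick sanity check before applying it to measured data.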
18.
K. M. Assefa, Particulate Science and Technology, 2017, 35(1): 77–85
Bench scale tests were carried out to investigate the rheological properties of multi-sized particulate Bingham slurries at high solid concentrations ranging from 50% to 70% by weight. In addition, rheological data from Biswas et al. (2000) and Chandel et al. (2009, 2010) have also been considered. Based on this extensive body of rheological data, an empirical model is proposed for viscosity as a function of solid volume fraction (φ), maximum solid volume fraction (φm), median particle diameter (d50), and coefficient of uniformity (Cu), using optimization and a nonlinear least-squares curve-fitting technique. The proposed model shows good agreement with the experimental data considered in the present study and is found to be much better than previously developed models in predicting the viscosity of multi-sized particulate Bingham slurries at high solid concentrations.
19.
Two constitutive models representative of two well-known modeling techniques for superelastic shape-memory wires are reviewed. The first model was proposed by Kim and Abeyaratne in the framework of finite thermo-elasticity with non-convex energy [1]. In the present article this model has been modified to account for the difference between the elastic moduli of austenite and martensite and to introduce the isothermal approximation proposed in [1]. The second model was developed by Auricchio et al. within the theory of irreversible thermodynamics with internal variables [2]. Both models are temperature and strain-rate dependent and take thermal effects into account. The focus of this article is on investigating how the two models compare with experimental data obtained from testing superelastic NiTi wires used in the design of a prototypal anti-seismic device [3, 4]. After model calibration and numerical implementation, numerical simulations based on the two models are compared with data obtained from uniaxial tensile tests performed at two different temperatures and various strain rates.
20.
A. Tenorio, C. M. Pereyra, E. J. Martínez De La Ossa, Particulate Science and Technology, 2013, 31(3): 262–266
Pharmaceutical preparations are the final product of a technological process that gives drugs the characteristics appropriate for easy administration, proper dosage, and enhanced therapeutic efficacy. The design of pharmaceutical preparations in nanoparticulate form has emerged as a new strategy for drug delivery (Pasquali, Bettini, and Giordano, 2006). Particle size (PS) and particle size distribution (PSD) are critical parameters that determine the rate of dissolution of the drug in biological fluids and, hence, have a significant effect on the bioavailability of those drugs that have poor solubility in water, for which dissolution is the rate-limiting step in the absorption process (Perrut, Jung, and Leboeuf, 2005; Van Nijlen et al., 2003). Supercritical antisolvent (SAS) processes have been widely used to precipitate active pharmaceutical ingredients (APIs) (Chattopadhyay and Gupta, 2001; Rehman et al., 2001) with a high level of purity, suitable dimensional characteristics, narrow PSD, and spherical morphologies. The SAS process is based on the particular properties of supercritical fluids (SCFs). These fluids have diffusivities two orders of magnitude larger than those of liquids, resulting in a faster mass transfer rate. SCF properties (solvent power and selectivity) can also be adjusted continuously by altering the experimental conditions (temperature and pressure). Moreover, SCFs can be removed from the process by a simple change from supercritical to room conditions, which avoids difficult post-treatments of waste liquid streams. Among all possible SCFs, carbon dioxide (CO2) at supercritical conditions is widely used because of its relatively low critical temperature (31.1°C) and pressure (73.8 bar), low toxicity, and low cost.
In this article, we show some results for processed antibiotics (ampicillin and amoxicillin), two of the world's most widely prescribed antibiotics, when they are dissolved in 1-methyl-2-pyrrolidone (NMP) and carbon dioxide is used as the antisolvent.