Similar Documents
20 similar documents retrieved (search time: 31 ms)
1.
Lumpy demand is a phenomenon encountered in manufacturing or retailing when items are slow-moving or very expensive, for example fighter-plane engines. So far, the seminal procedure of Croston (1972), with or without modifications, has been the method of choice for forecasting lumpy demand. Nevertheless, Croston (1974) and others, such as Venkitachalam et al. (2002), have suggested the use of zero forecasts when the demand contains many zeros. In this paper, we put this idea to the test with a full factorial study comparing five forecasting methods, including all-zero, under several levels of demand lumpiness, demand variation, and ordering, holding and shortage cost. We evaluate the forecasting methods by three measures of forecast error and two measures of inventory cost. We find that all-zero forecasts yield the lowest cost when lumpiness is high; they are also best for mid-lumpiness if the shortage cost is much higher than the holding cost. We also find that the lowest forecasting error does not necessarily lead to the lowest system cost. And, contrary to the assertions in Chen et al. (2000b) and Dejonckheere et al. (2003, 2004), our factorial experiment reinforces the intuition that simple exponential smoothing is superior to an equivalent moving average.
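As a concrete illustration of the methods being compared, the sketch below implements Croston's procedure and simple exponential smoothing on a synthetic lumpy series and contrasts them with an all-zero forecast. The demand series, smoothing constant and error measure are illustrative assumptions, not the paper's experimental design.

```python
import numpy as np

def croston(demand, alpha=0.1):
    """Croston (1972): smooth non-zero demand sizes and inter-demand
    intervals separately; per-period forecast = smoothed size / smoothed interval."""
    z, p = None, None          # smoothed size, smoothed interval
    q = 1                      # periods since last non-zero demand
    forecasts = []
    for d in demand:
        if d > 0:
            z = d if z is None else alpha * d + (1 - alpha) * z
            p = q if p is None else alpha * q + (1 - alpha) * p
            q = 1
        else:
            q += 1
        forecasts.append(0.0 if z is None else z / p)
    return np.array(forecasts)

def ses(demand, alpha=0.1):
    """Simple exponential smoothing, initialised at the first observation."""
    f, out = float(demand[0]), []
    for d in demand:
        out.append(f)
        f = alpha * d + (1 - alpha) * f
    return np.array(out)

rng = np.random.default_rng(0)
# Synthetic lumpy series: mostly zeros, occasional large demands.
demand = rng.binomial(1, 0.2, 200) * rng.poisson(20, 200)
for name, fc in [("Croston", croston(demand)),
                 ("SES", ses(demand)),
                 ("All-zero", np.zeros(len(demand)))]:
    print(name, "MAD =", round(np.mean(np.abs(demand - fc)), 2))
```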

2.
An error in equation (3) of Y.L. Zhang (J. Appl. Prob., 1994, 31, 1123–1127) was pointed out by S.H. Sheu (Eur. J. Oper. Res., 1999, 112, 503–516), and the correct expressions (25)–(27) were given accordingly on pp. 510–511. However, the derivation of the key expression (27), the long-run expected loss rate, was not presented. The purpose of this note is threefold. First, since a monotone-process (e.g. arithmetic, geometric, or arithmetic–geometric process) approach, as discussed by K.N.F. Leung (Eng. Optimiz., 2001, 33, 473–484), is relevant, realistic and appropriate for modelling the maintenance of a deteriorating system, it is worth developing this expression explicitly, to the benefit of subsequent studies. Secondly, equation (3) in Zhang (1994) is shown to be fundamentally correct, so it can be viewed as an alternative way of formulating similar bivariate cases. Thirdly, although equations (4) and (5) in Zhang (1994) are logically and correctly derived, both can be reduced to the simpler forms derived here.
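For readers unfamiliar with how such a rate is structured, long-run expected loss (cost) rates of maintenance policies are typically obtained from the renewal-reward theorem; the generic form is sketched below. This is only the general structure, not Zhang's specific expression (27) or Sheu's corrected expressions (25)–(27).

```latex
% Generic renewal-reward structure of a long-run expected cost (loss) rate.
% The specific cycle cost and cycle length used by Zhang (1994) and
% Sheu (1999) are not reproduced here.
C \;=\; \lim_{t\to\infty}\frac{\mathbb{E}[\text{cost on }(0,t]]}{t}
  \;=\; \frac{\mathbb{E}[\text{cost incurred in one renewal cycle}]}
             {\mathbb{E}[\text{length of one renewal cycle}]}
```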

3.
The main purpose of this corrigendum is to point out and rectify the same mistakes made by Schrady (1967), Nahmias and Rivera (1979), and Teunter (2004) in solving their respective models, so that subsequent researchers do not repeat them. To this end, we derive the correct global-optimal formulae for the substitution-policy model (1, n), with an infinite or finite recovery (repair) rate, using differential calculus, and provide a closed-form expression for the optimal positive integer value of n recovery set-ups. In addition, we rectify the formulae and solution procedure for numerically solving the constrained non-linear programme.
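Once a corrected total-cost expression is available, the optimal integer number of recovery set-ups n can be located by simple enumeration. The sketch below shows that generic pattern only; the cost function used here is a hypothetical placeholder, not the corrigendum's closed-form formula.

```python
import math

def optimal_integer_n(cost, n_max=200):
    """Evaluate a total-cost function at integer n = 1..n_max and return the
    minimiser.  'cost' is a placeholder callable standing in for a corrected
    total-cost expression of the (1, n) policy."""
    best_n = min(range(1, n_max + 1), key=cost)
    return best_n, cost(best_n)

# Hypothetical convex-in-n cost shape, used purely for illustration.
example_cost = lambda n: 50.0 * n + 400.0 / n + 12.0 * math.sqrt(n)
n_star, c_star = optimal_integer_n(example_cost)
print("n* =", n_star, " cost =", round(c_star, 2))
```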

4.
A strategy is presented for obtaining production sequences that result in minimal tooling replacements. An objective function is employed to distribute tool wear as evenly as possible throughout the sequence. This objective function extends Miltenburg's earlier work (1989), which was concerned with obtaining production sequences that evenly distribute the satisfaction of demand. Smaller problems are solved to optimality, while larger problems are solved as close to optimality as possible. The production sequences are simulated to estimate the required tooling replacements. The methodology presented here consistently results in fewer tooling replacements than earlier published work (McMullen et al. 2002, McMullen 2003).
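The levelness idea underlying the objective can be illustrated with Miltenburg's (1989) total squared deviation between cumulative production and ideal proportional production. The sketch below evaluates that base objective for a given sequence; the tool-wear weighting that this paper adds is not reproduced, and the demand data are hypothetical.

```python
def miltenburg_deviation(sequence, demand):
    """Miltenburg (1989)-style levelness objective: after each position k,
    penalise the squared gap between cumulative production x_ik and the
    ideal proportional quantity k * d_i / D."""
    products = sorted(demand)
    total = sum(demand.values())
    produced = {p: 0 for p in products}
    objective = 0.0
    for k, item in enumerate(sequence, start=1):
        produced[item] += 1
        objective += sum((produced[p] - k * demand[p] / total) ** 2
                         for p in products)
    return objective

demand = {"A": 2, "B": 1, "C": 1}
print(miltenburg_deviation(["A", "B", "A", "C"], demand))   # level sequence
print(miltenburg_deviation(["A", "A", "B", "C"], demand))   # front-loaded, higher penalty
```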

5.
The effects of changing the unit time length of a planning horizon from a month to a week on the optimum planning horizon were examined by calculating the optimum planning horizon with the methods proposed by Nagasawa, Nishiyama, and Hitomi (1982). It was found that the optimum planning horizon decreased by 20–30% in calendar time when the unit time length was changed from a month (monthly scheduling) to a week (weekly scheduling). However, this decrease was much smaller than the 65% reported by Bernardo (1978), and it followed that the optimum planning horizon, measured in number of periods, increased considerably with this change of unit time length. It was also clarified that the large decrease reported by Bernardo was derived from an erroneous analysis of the relation between cost coefficients and the unit time length. Consequently, weekly scheduling was not always preferable to monthly scheduling.

6.
In this paper, the problem of minimising the maximum completion time on a single batch processing machine is studied. Batch processing is performed on a machine that can simultaneously process several jobs as a batch, and the processing time of a batch is determined by the longest processing time of the jobs in it. The batch processing machine problem is encountered in many manufacturing systems, such as burn-in operations in the semiconductor industry and heat-treatment operations in the metalworking industries. Heuristics are developed by iterative decomposition of a mixed integer programming model, modified from the successive knapsack problem of Ghazvini and Dupont (1998) and the waste of batch clustering algorithm of Chen, Du, and Huang (2011). Experimental results show that the suggested heuristics produce high-quality solutions, comparable to those of previous heuristics, in a reasonable computation time.
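To make the batch-makespan definition concrete, the sketch below evaluates the makespan of a batching (each batch runs for its longest job) and builds batches with a simple first-fit-decreasing rule under a machine capacity limit. This baseline is illustrative only; it is not the MIP-decomposition heuristic proposed in the paper, and the job data are hypothetical.

```python
def batch_makespan(batches, proc_time):
    """Makespan on a single batch machine: each batch runs for the longest
    processing time among its jobs, and batches run back to back."""
    return sum(max(proc_time[j] for j in batch) for batch in batches)

def first_fit_decreasing(jobs, size, proc_time, capacity):
    """Baseline batching rule (not the paper's decomposition heuristic):
    sort jobs by descending processing time and place each one into the
    first batch with enough residual capacity."""
    batches, loads = [], []
    for j in sorted(jobs, key=lambda j: -proc_time[j]):
        for i, load in enumerate(loads):
            if load + size[j] <= capacity:
                batches[i].append(j)
                loads[i] += size[j]
                break
        else:
            batches.append([j])
            loads.append(size[j])
    return batches

jobs = ["J1", "J2", "J3", "J4", "J5"]
size = {"J1": 4, "J2": 3, "J3": 5, "J4": 2, "J5": 4}
proc = {"J1": 8, "J2": 6, "J3": 7, "J4": 3, "J5": 5}
b = first_fit_decreasing(jobs, size, proc, capacity=8)
print(b, "makespan =", batch_makespan(b, proc))
```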

7.
This paper presents a corrected formulation of the mixed integer programming model of the double-row layout problem (DRLP), first proposed by Chung and Tanchoco (2010). In the DRLP, machines are placed along the two rows of a corridor, and the objective is to minimise the total material-handling cost for products that move between these machines. We highlight the errors in the original formulation, propose corrections, and provide an analytical validation of the corrections.

8.
This paper compares two tools intended to support decision makers in selecting the appropriate supplier. Suppliers are crucial to both the efficiency and the effectiveness of company performance, and selecting the appropriate supplier is a critical success factor. A methodology is proposed to optimise the evaluation process based on different criteria. The proposed approach extends that of Ordoobadi (2009), who applied fuzzy logic (FL); we use the same example case study in order to compare the analytic hierarchy process (AHP) with FL. We demonstrate how the same objective of expressing human assessments as linguistic expressions can be achieved with AHP. Moreover, we demonstrate the capability to run a sensitivity analysis, which helps to understand the causal relationships among the different factors and to explain and predict the relationships among criteria and alternatives. We also provide a measure of the consistency of the decision maker's preferences. Our approach provides a single scale that not only ranks suppliers but also quantifies the differences between them, which can then help to allocate resources accordingly. These facilities are not offered by Ordoobadi (2009). The proposed approach can help companies to identify the best supplier in changing environments. The paper describes a decision model that incorporates a decision maker's subjective assessments and applies a multiple criteria decision making technique to manipulate and quantify these assessments. Unlike many similar studies, the two techniques are applied to the same case study in order to improve understanding of their differences.
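A minimal sketch of the AHP computations discussed here (priority weights from the principal eigenvector of a pairwise comparison matrix, plus Saaty's consistency ratio) is given below. The criteria and judgements are hypothetical and not taken from Ordoobadi's case study.

```python
import numpy as np

def ahp_priorities(pairwise):
    """Priority weights from a pairwise comparison matrix via the principal
    eigenvector, plus Saaty's consistency ratio (CR)."""
    A = np.asarray(pairwise, dtype=float)
    n = A.shape[0]
    eigvals, eigvecs = np.linalg.eig(A)
    k = np.argmax(eigvals.real)
    w = np.abs(eigvecs[:, k].real)
    w /= w.sum()
    ci = (eigvals[k].real - n) / (n - 1)          # consistency index
    ri = {3: 0.58, 4: 0.90, 5: 1.12}[n]           # Saaty's random indices
    return w, ci / ri

# Hypothetical 3-criterion comparison matrix (cost, quality, delivery).
A = [[1,   3,   5],
     [1/3, 1,   2],
     [1/5, 1/2, 1]]
weights, cr = ahp_priorities(A)
print(np.round(weights, 3), "CR =", round(cr, 3))   # CR < 0.1 => acceptably consistent
```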

9.
As Pfeffer (1993) states, until agreement is reached on a subject, progress may be slow. This paper converges the discussions on social capital in the operations management literature by way of a systematic literature review of 3- and 4-star journals. Human resource management, voluntary work and entrepreneurship were identified as minor themes within the review and thus potentially under-explored areas. Quality management, project management and new product development show significant use of social capital, particularly the role of social capital in the intra-firm environment. Finally, supply chain management shows the most significant use of social capital, particularly in explaining the characteristics of buyer–supplier relationships and how these affect inter-firm performance. Areas of future research are presented that draw on all forms of social capital to explore how they may be affected by contextual factors. The paper concludes by proposing a conceptual model of social capital for use within operations management.

10.
Fatigue-induced damage is often progressive and gradual in nature. In ageing structures, fatigue is often aggravated by corrosion, creating maintenance problems and even causing catastrophic failure. This has spurred the development of structural health monitoring (SHM) and non-destructive evaluation (NDE) systems. The recent advent of smart materials applicable to SHM alleviates the shortcomings of conventional techniques: autonomous, real-time, remote monitoring becomes possible with smart piezoelectric transducers. For instance, the electro-mechanical impedance (EMI) technique, which employs piezoelectric transducers as collocated actuators and sensors, is known for its ability to detect and characterise damage. This article presents a series of lab-scale experimental tests and analyses investigating the feasibility of fatigue crack detection and characterisation with the EMI technique. The study extends the work of Lim and Soh (2011) to incorporate the phases involving crack initiation and the critical crack. The results suggest that the EMI technique is effective in characterising fatigue-induced cracking, even in its incipient stage. Micro-cracks invisible to the naked eye can be detected, especially when the higher frequency range of 100–200 kHz is employed. A quick and handy qualitative critical-crack identification method, based on visual inspection of the admittance frequency spectrum, is also suggested.
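Damage quantification in EMI studies is commonly based on a statistical deviation between a baseline and a current admittance signature, such as the root-mean-square deviation (RMSD) sketched below. The exact metric and signatures used in this article may differ; the data here are synthetic.

```python
import numpy as np

def rmsd_damage_index(baseline, current):
    """Root-mean-square deviation between a baseline and a current admittance
    signature -- a statistical damage metric commonly used with the EMI
    technique (the metric used in the article may differ)."""
    baseline = np.asarray(baseline, dtype=float)
    current = np.asarray(current, dtype=float)
    return np.sqrt(np.sum((current - baseline) ** 2) / np.sum(baseline ** 2))

# Hypothetical conductance signatures over a 100-200 kHz sweep.
freq = np.linspace(100e3, 200e3, 400)
healthy = 1e-3 * (1 + 0.2 * np.sin(freq / 7e3))
cracked = healthy + 5e-5 * np.sin(freq / 2.5e3)   # small signature shift
print("RMSD =", round(rmsd_damage_index(healthy, cracked), 4))
```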

11.
A new algorithm is proposed for topology optimization based on a fluid dynamics analogy. It shares characteristics with the best-known methods, such as the Evolutionary Structural Optimization (ESO)/Bidirectional Evolutionary Structural Optimization (BESO) method of Xie and Steven (1993), which works with discrete densities, and the Solid Isotropic Material with Penalization (SIMP) method of Bendsøe (1989) and Zhou and Rozvany (1991) (using the Optimality Criterion (OC) method or the Method of Moving Asymptotes (MMA)), which works with intermediate densities: the new algorithm can work with both discrete and intermediate densities, but always yields a solution with discrete densities. It can be proven mathematically that the new method is a generalization of the BESO method, and with appropriate parameters it operates exactly as the BESO method. The new method is less sensitive to rounding errors of the matrix solver than the BESO method and is able to give alternative topologies for well-known problems. The article presents the basic idea and the optimization algorithm, and compares the results of three cantilever optimizations with the results of the SIMP and BESO methods.
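To contrast the two density philosophies mentioned above, the sketch below shows a BESO-style hard-kill update (discrete 0/1 densities) next to the SIMP stiffness interpolation (penalised intermediate densities). The element sensitivities are random placeholders standing in for finite-element results, and this is not the fluid-dynamics-analogy algorithm itself.

```python
import numpy as np

def beso_update(sensitivities, target_volume_fraction):
    """BESO-style hard-kill update: keep the highest-sensitivity elements at
    density 1 and remove the rest (density 0) to hit the target volume."""
    n_keep = int(round(target_volume_fraction * sensitivities.size))
    keep = np.argsort(sensitivities)[::-1][:n_keep]
    rho = np.zeros_like(sensitivities)
    rho[keep] = 1.0
    return rho

def simp_stiffness(rho, E0=1.0, Emin=1e-9, p=3.0):
    """SIMP interpolation: intermediate densities are penalised so the
    optimiser is pushed toward near-0/1 designs."""
    return Emin + rho ** p * (E0 - Emin)

# Dummy element sensitivities standing in for finite-element results.
sens = np.random.default_rng(1).random(10)
print("BESO densities:", beso_update(sens, target_volume_fraction=0.5))
print("SIMP stiffness at rho = 0.5:", simp_stiffness(0.5))
```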

12.
In this paper, a simulated annealing approach is developed for the parallel mixed-model assembly line balancing and model sequencing (PMMAL/BS) problem, which is an extension of the parallel assembly line balancing (PALB) problem introduced by Gökçen et al. (2006). In PALB, the aim is to balance more than one assembly line together; balancing the lines simultaneously with a common resource is very important for resource minimisation. The proposed approach maximises line efficiency and distributes the workloads smoothly across stations. It is illustrated with two numerical examples and its performance is tested on a set of test problems. The computational results show that the proposed approach is very effective for PMMAL/BS.
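A generic simulated annealing skeleton of the kind adapted here is sketched below. The neighbourhood move and the PMMAL/BS objective (line efficiency and workload smoothness) are left as user-supplied callables, so the toy problem at the end is purely illustrative.

```python
import math
import random

def simulated_annealing(initial, neighbour, cost,
                        t0=100.0, cooling=0.95, iters_per_temp=50, t_min=1e-3):
    """Generic simulated annealing skeleton; the PMMAL/BS-specific move and
    objective would be supplied through 'neighbour' and 'cost'."""
    current, best = initial, initial
    t = t0
    while t > t_min:
        for _ in range(iters_per_temp):
            candidate = neighbour(current)
            delta = cost(candidate) - cost(current)
            if delta < 0 or random.random() < math.exp(-delta / t):
                current = candidate
                if cost(current) < cost(best):
                    best = current
        t *= cooling                      # geometric cooling schedule
    return best

# Toy usage: drive a permutation toward sorted order by pairwise swaps.
def swap_two(seq):
    s = list(seq)
    i, j = random.sample(range(len(s)), 2)
    s[i], s[j] = s[j], s[i]
    return s

data = [5, 1, 4, 2, 3]
cost = lambda s: sum(abs(v - (k + 1)) for k, v in enumerate(s))
print(simulated_annealing(data, swap_two, cost))
```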

13.
This paper considers a two-stage assembly flow shop problem in which m parallel machines form the first stage and an assembly machine forms the second stage. The objective is to minimise a weighted sum of makespan and mean completion time for the n available jobs. As this problem is proven to be NP-hard, we employ an imperialist competitive algorithm (ICA) as the solution approach. Torabzadeh and Zandieh (2010) showed that a cloud-theory-based simulated annealing algorithm (CSA) is an appropriate meta-heuristic for this problem; to justify the claim for ICA's capability, we compare the proposed ICA with the reported CSA. A neural network is also introduced as a new parameter-tuning tool for ICA. The computational results show that ICA outperforms CSA in solution quality.
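The objective being minimised can be evaluated for any job sequence with the standard two-stage assembly flow shop recursion sketched below: a job's assembly can start only after all of its components are finished on the m first-stage machines. The processing times and weight w are hypothetical; this routine is an evaluation helper, not the ICA itself.

```python
def weighted_objective(sequence, stage1_times, assembly_times, w=0.5):
    """Two-stage assembly flow shop: each of the m first-stage machines makes
    one component of every job; assembly starts once all components are ready.
    Returns w * makespan + (1 - w) * mean completion time."""
    m = len(stage1_times)                 # stage1_times[i][j]: machine i, job j
    machine_ready = [0.0] * m
    assembly_ready = 0.0
    completions = []
    for j in sequence:
        component_done = []
        for i in range(m):
            machine_ready[i] += stage1_times[i][j]
            component_done.append(machine_ready[i])
        start = max(assembly_ready, max(component_done))
        assembly_ready = start + assembly_times[j]
        completions.append(assembly_ready)
    makespan = completions[-1]
    mean_completion = sum(completions) / len(completions)
    return w * makespan + (1 - w) * mean_completion

stage1 = [[3, 2, 4], [2, 5, 1]]           # 2 machines x 3 jobs (hypothetical data)
assembly = [4, 3, 2]
print(weighted_objective([0, 1, 2], stage1, assembly))
```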

14.
Process yield is an important criterion used in the manufacturing industry for measuring process performance. Methods for measuring yield for processes with a single characteristic have been investigated extensively, whereas methods for processes with multiple characteristics have been comparatively neglected. In this paper, we develop a generalized yield index, TSpk,PC, based on the index Spk introduced by Boyles (Journal of Quality Technology, 23, 17–26, 1991) and the principal component analysis (PCA) technique, and we obtain a lower confidence bound (LCB) for the true process yield. The proposed method can be used to determine whether a process meets a preset yield requirement and to make reliable decisions. Examples are provided to demonstrate the proposed methodology.
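For the single-characteristic case, the yield index Spk and the corresponding exact yield can be computed as sketched below. The PCA-based multivariate generalization TSpk,PC and its lower confidence bound, which are the paper's contribution, are not reproduced; the specification limits here are hypothetical.

```python
from scipy.stats import norm

def spk(mu, sigma, lsl, usl):
    """Yield-based index for one normal characteristic:
    S_pk = (1/3) * Phi^-1( 0.5*Phi((USL-mu)/sigma) + 0.5*Phi((mu-LSL)/sigma) ),
    so that the exact yield equals 2*Phi(3*S_pk) - 1."""
    p = 0.5 * norm.cdf((usl - mu) / sigma) + 0.5 * norm.cdf((mu - lsl) / sigma)
    return norm.ppf(p) / 3.0

s = spk(mu=10.2, sigma=0.3, lsl=9.0, usl=11.0)
print("S_pk =", round(s, 3), " yield =", round(2 * norm.cdf(3 * s) - 1, 5))
```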

15.
Ehsan Valian & Elham Valian, Engineering Optimization, 2013, 45(11): 1273–1286
A new metaheuristic optimization algorithm, called cuckoo search (CS), was recently developed by Yang and Deb (2009, 2010). This article uses CS with Lévy flights to solve the reliability redundancy allocation problem. The redundancy allocation problem involves setting reliability objectives for components or subsystems in order to meet a resource consumption constraint, e.g. the total cost. The main difficulty of the problem is maintaining feasibility with respect to three nonlinear constraints, namely cost-, weight- and volume-related constraints. Redundancy allocation problems have been studied in the literature for decades, usually using mathematical programming or metaheuristic optimization algorithms. The performance of the algorithm is tested on five well-known reliability redundancy allocation problems and compared with several well-known methods. Simulation results demonstrate that the optimal solutions obtained by CS are better than the best solutions obtained by other methods.
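Candidate generation in cuckoo search relies on Lévy-flight steps, commonly produced with Mantegna's algorithm as sketched below. The step-size scaling, the reliability vector and the move toward the current best solution are illustrative assumptions, not the exact update used in this article.

```python
import numpy as np
from math import gamma, sin, pi

def levy_step(beta=1.5, size=1):
    """Lévy-flight step lengths via Mantegna's algorithm, as commonly used in
    cuckoo search to generate new candidate solutions."""
    sigma_u = (gamma(1 + beta) * sin(pi * beta / 2) /
               (gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    u = np.random.normal(0, sigma_u, size)
    v = np.random.normal(0, 1, size)
    return u / np.abs(v) ** (1 / beta)

# One CS-style move: perturb a current solution x along a Lévy step relative
# to the best solution found so far (alpha is a tunable step-size scaling).
x = np.array([0.80, 0.70, 0.90])       # hypothetical component reliabilities
best = np.array([0.90, 0.85, 0.95])
alpha = 0.01
x_new = np.clip(x + alpha * levy_step(size=x.size) * (x - best), 0.0, 1.0)
print(x_new)
```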

16.
Rajagopalan and Irani ('Some comments on Malakooti et al. "Integrated group technology, cell formation, process planning, and production planning with application to the emergency room"', Int. J. Prod. Res., 2006, 44, 2265–2276) provide a critique of the integrated cell/process/capacity formation (ICPCF) approach of Malakooti et al. ('Integrated group technology, cell formation, process and production planning with application to the emergency room', Int. J. Prod. Res., 2004, 42, 1769–1786) and suggest an improved method for solving the ICPCF problem. Rajagopalan and Irani (2006) attempt to solve the emergency-room layout problem presented in Malakooti et al. (2004) and claim to have obtained an improved solution from their approach (a hybrid flowshop layout). Although there are certain advantages to Rajagopalan and Irani's (2006) approach, we believe that their approach to solving ICPCF problems has significant shortcomings.

17.
In this study, fluidized bed drying experiments were conducted on poplar wood particles (Populus deltoides) at temperatures ranging from 90°C to 120°C and air velocities ranging from 2.8 m/s to 3.3 m/s. The initial moisture content (MC) of the poplar wood particles was 150% (on an oven-dry basis) and the bed height was 2 cm. The results showed that the drying rate increased with increasing drying temperature and air velocity. The constant drying rate period was observed only at the early stages of the drying process, and most of the drying took place in the falling rate period. The experimental drying data were fitted to 11 models; among these, the models of Midilli, Kucuk, and Yapar (2002) and Henderson and Pabis (1961) were found to describe the drying characteristics of poplar wood particles satisfactorily. The effective moisture diffusivity of the wood particles increased from 7E-6 to 8.46E-6 m²/s and from 7.65E-6 to 1.44E-5 m²/s as the drying air temperature increased from 90°C to 120°C at air velocities of 2.8 m/s and 3.3 m/s, respectively. The activation energies of diffusion were 34.08 kJ/mol and 64.70 kJ/mol for air velocities of 2.8 m/s and 3.3 m/s, respectively.
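Fitting a thin-layer model such as Henderson and Pabis (1961), MR = a·exp(−k·t), to moisture-ratio data is a nonlinear least-squares problem; a sketch is given below using synthetic data standing in for the experiments (the parameter values are not the paper's).

```python
import numpy as np
from scipy.optimize import curve_fit

def henderson_pabis(t, a, k):
    """Henderson and Pabis (1961) thin-layer model: MR = a * exp(-k * t)."""
    return a * np.exp(-k * t)

# Synthetic moisture-ratio data, MR = (M - Me) / (M0 - Me), over drying time
# in minutes; these values stand in for the paper's measurements.
t = np.linspace(0, 120, 25)
mr_obs = 0.98 * np.exp(-0.035 * t) + np.random.default_rng(0).normal(0, 0.01, t.size)

(a, k), _ = curve_fit(henderson_pabis, t, mr_obs, p0=(1.0, 0.01))
rmse = np.sqrt(np.mean((mr_obs - henderson_pabis(t, a, k)) ** 2))
print(f"a = {a:.3f}, k = {k:.4f} 1/min, RMSE = {rmse:.4f}")
```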

18.
Bench-scale tests were carried out to investigate the rheological properties of multi-sized particulate Bingham slurries at high solids concentrations ranging from 50% to 70% by weight. In addition, rheological data from Biswas et al. (2000) and Chandel et al. (2009, 2010) were also considered. Based on this extensive body of rheological data, an empirical model is proposed for viscosity as a function of solids volume fraction (φ), maximum solids volume fraction (φm), median particle diameter (d50), and coefficient of uniformity (Cu), using an optimization and nonlinear least-squares curve-fitting technique. The proposed model shows good agreement with the experimental data considered in the present study and is found to be much better than previously developed models in predicting the viscosity of multi-sized particulate Bingham slurries at high solids concentrations.
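The fitting approach can be illustrated with a nonlinear least-squares calibration of a relative-viscosity model; the Krieger–Dougherty-type form below is used purely as a placeholder, since the paper's empirical model additionally involves d50 and Cu, and the data points are hypothetical.

```python
import numpy as np
from scipy.optimize import curve_fit

def relative_viscosity(phi, phi_m, n):
    """Krieger-Dougherty-type form, used here only as a placeholder for the
    paper's empirical model (which also involves d50 and Cu)."""
    return (1.0 - phi / phi_m) ** (-n)

# Hypothetical relative-viscosity measurements versus solids volume fraction.
phi = np.array([0.30, 0.35, 0.40, 0.45, 0.50, 0.55])
mu_r = np.array([3.2, 4.3, 5.9, 8.8, 14.3, 27.4])

(phi_m, n), _ = curve_fit(relative_viscosity, phi, mu_r, p0=(0.70, 2.0),
                          bounds=([0.60, 0.5], [0.85, 5.0]))
print(f"phi_m = {phi_m:.3f}, n = {n:.2f}")
```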

19.
Two constitutive models representative of two well-known modelling techniques for superelastic shape-memory wires are reviewed. The first model was proposed by Kim and Abeyaratne in the framework of finite thermo-elasticity with non-convex energy [1]. In the present article this model is modified to take into account the difference between the elastic moduli of austenite and martensite and to introduce the isothermal approximation proposed in [1]. The second model was developed by Auricchio et al. within the theory of irreversible thermodynamics with internal variables [2]. Both models are temperature and strain-rate dependent and take thermal effects into account. The focus of this article is on how the two models compare with experimental data obtained from testing superelastic NiTi wires used in the design of a prototype anti-seismic device [3, 4]. After model calibration and numerical implementation, numerical simulations based on the two models are compared with data obtained from uniaxial tensile tests performed at two different temperatures and various strain rates.

20.
Pharmaceutical preparations are the final product of a technological process that gives drugs characteristics appropriate for easy administration, proper dosage, and enhanced therapeutic efficacy. The design of pharmaceutical preparations in nanoparticulate form has emerged as a new strategy for drug delivery (Pasquali, Bettini, and Giordano 2006). Particle size (PS) and particle size distribution (PSD) are critical parameters that determine the rate of dissolution of a drug in biological fluids and hence have a significant effect on the bioavailability of drugs with poor water solubility, for which dissolution is the rate-limiting step in the absorption process (Perrut, Jung, and Leboeuf 2005; Van Nijlen et al. 2003). Supercritical antisolvent (SAS) processes have been widely used to precipitate active pharmaceutical ingredients (APIs) (Chattopadhyay and Gupta 2001; Rehman et al. 2001) with a high level of purity, suitable dimensional characteristics, narrow PSD, and spherical morphologies. The SAS process is based on the particular properties of supercritical fluids (SCFs). These fluids have diffusivities two orders of magnitude larger than those of liquids, resulting in faster mass transfer rates, and SCF properties (solvent power and selectivity) can be adjusted continuously by altering the experimental conditions (temperature and pressure). As a consequence, SCFs can be removed from the process by a simple change from supercritical to ambient conditions, which avoids difficult post-treatment of waste liquid streams. Among all possible SCFs, carbon dioxide (CO2) at supercritical conditions is widely used because of its relatively low critical temperature (31.1°C) and pressure (73.8 bar), low toxicity, and low cost. In this article, we show some results for processed antibiotics (ampicillin and amoxicillin), two of the world's most widely prescribed antibiotics, dissolved in 1-methyl-2-pyrrolidone (NMP) with carbon dioxide used as the antisolvent.
