Similar Documents
20 similar documents found (search time: 15 ms)
1.
A new conceptual model for carbon nanoparticle formation in shock waves is proposed, based on recent data on the temperature dependence of the final sizes of the resulting particles and on the abrupt increase in their refractive index as particle size grows from 5 to 15 nm. The model rests on two physically distinct assumptions. First, the volume fraction of condensed carbon remains constant from the temperature at which the initial carbon-containing molecules fully decompose (1600–2000 K) up to the evaporation temperature of carbon nanoparticles (3000–3500 K). Second, the surface growth rate of the particles is determined by the rate of collisions between vapor molecules and particles. The proposed model explains all observed regularities of carbon nanoparticle growth, including the decrease in final particle size and the corresponding decrease in particle formation time as temperature rises.
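
As a back-of-the-envelope illustration of the second assumption, the Python sketch below integrates a collision-limited growth law dR/dt = α n v̄ m / (4ρ), where v̄ is the kinetic-theory mean thermal speed of vapor molecules; the vapor density, sticking coefficient and particle density used here are illustrative placeholders, not values from the paper.

    import numpy as np

    kB = 1.380649e-23   # Boltzmann constant, J/K
    m_C = 1.994e-26     # mass of a carbon atom, kg
    rho = 1.9e3         # assumed particle density, kg/m^3
    alpha = 1.0         # assumed sticking coefficient
    T = 2500.0          # gas temperature, K (illustrative)
    n_vap = 1e24        # assumed carbon vapor number density, 1/m^3

    # kinetic theory: mean speed and collision-limited radial growth rate
    v_mean = np.sqrt(8.0 * kB * T / (np.pi * m_C))
    dRdt = alpha * n_vap * v_mean * m_C / (4.0 * rho)

    # time for the radius to grow from 5 nm to 15 nm at constant vapor density
    t_grow = (15e-9 - 5e-9) / dRdt
    print(f"growth rate = {dRdt:.3e} m/s, time 5 -> 15 nm = {t_grow:.3e} s")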

2.
Despite a growing global appetite for sugar as both a foodstuff and a fuel source, there is limited literature that explores sugarcane operations. In this paper, we study the scheduling of harvest and logistics operations in the state of Louisiana in the United States. These operations account for a significant portion of total sugarcane production costs. We develop an integer programming model for coordinating the harvest and transport of sugarcane. The model seeks to reduce vehicle waiting time at the mill by maximising the minimum gap between two successive arrivals at the mill. To improve tractability, we introduce valid inequalities and optimality cuts. We also demonstrate how to adapt solutions from a previous discrete-time model. Our results show that arrivals can easily be coordinated to reduce truck waiting time at the mill.
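
For intuition, here is a minimal toy version of the max-min gap objective in Python with PuLP, assuming a fixed truck sequence and hypothetical arrival-time windows; the paper's integer program is far richer (harvest, loading and transport constraints), so this is only a sketch of the objective.

    # pip install pulp
    import pulp

    # hypothetical data: truck i must arrive at the mill within [earliest[i], latest[i]]
    earliest = [0, 10, 20, 25, 40]
    latest = [30, 45, 60, 70, 90]
    n = len(earliest)

    prob = pulp.LpProblem("max_min_arrival_gap", pulp.LpMaximize)
    t = [pulp.LpVariable(f"t_{i}", lowBound=earliest[i], upBound=latest[i]) for i in range(n)]
    g = pulp.LpVariable("min_gap", lowBound=0)

    prob += g  # objective: maximise the smallest gap between successive arrivals
    for i in range(n - 1):
        prob += t[i + 1] - t[i] >= g  # arrivals (in their assumed fixed order) stay g apart

    prob.solve(pulp.PULP_CBC_CMD(msg=False))
    print("min gap =", pulp.value(g), "arrivals =", [pulp.value(x) for x in t])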

3.
The optimal inspection time for random fatigue crack growth is investigated theoretically on the basis of cost minimization, using a diffusive crack growth model. The optimal inspection time is defined so as to minimize the expected total cost expended up to the assessment time, at which the effect of an inspection is quantified. Assuming that (i) only one inspection is performed up to the assessment time and (ii) the component is immediately replaced if a crack is detected by the inspection, we formulate the expected total cost using the diffusive crack growth model, in which randomness associated with material inhomogeneity, the initial crack length and crack detection is taken into account. Through numerical calculations, we derive the optimal inspection time as well as the corresponding optimal expected cost. Finally, we perform parameter sensitivity analyses and discuss the optimal selection of the assessment time.
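
A minimal numerical sketch of the cost-minimization idea (not the paper's diffusive-crack formulation): assume a lognormal time to failure, a single inspection at time t that detects an impending failure with probability p_d, and illustrative costs; the expected total cost up to the assessment time T is then minimized with SciPy.

    from scipy.optimize import minimize_scalar
    from scipy.stats import lognorm

    T = 10.0                                 # assessment time (illustrative units)
    c_insp, c_rep, c_fail = 1.0, 5.0, 50.0   # assumed inspection/replacement/failure costs
    p_d = 0.9                                # assumed crack-detection probability
    tf = lognorm(s=0.5, scale=6.0)           # assumed time-to-failure distribution

    def expected_cost(t):
        p_early = tf.cdf(t)                  # fails before the inspection: cannot help
        p_mid = tf.cdf(T) - tf.cdf(t)        # would fail in (t, T]: detectable at t
        return (c_insp + c_fail * p_early
                + p_mid * (p_d * c_rep + (1.0 - p_d) * c_fail))

    res = minimize_scalar(expected_cost, bounds=(0.0, T), method="bounded")
    print(f"optimal inspection time ~ {res.x:.2f}, expected total cost ~ {res.fun:.2f}")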

4.
This article is divided into two parts. In the first part, we review and study the properties of single-stage cross-sectional and time series benchmarking procedures that have been proposed in the literature in the context of small area estimation. We compare cross-sectional and time series benchmarking empirically, using data generated from a time series model that complies with the familiar Fay–Herriot model at any given time point. In the second part, we review cross-sectional methods proposed for benchmarking hierarchical small areas and develop a new two-stage benchmarking procedure for hierarchical time series models. The latter procedure is applied to monthly unemployment estimates in Census Divisions and States of the USA.
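
For the flavour of benchmarking in its simplest form, the sketch below applies a generic ratio adjustment (not the authors' procedure): model-based small-area estimates are rescaled so that their weighted total agrees with a reliable direct national estimate; all numbers are toy values.

    import numpy as np

    est = np.array([120.0, 80.0, 200.0, 50.0])  # model-based small-area estimates (toy)
    w = np.array([0.3, 0.2, 0.4, 0.1])          # area weights, e.g. population shares
    national = 150.0                            # reliable direct national estimate (toy)

    factor = national / np.dot(w, est)          # common ratio-benchmarking factor
    benchmarked = factor * est
    print(benchmarked, np.dot(w, benchmarked))  # weighted mean now equals 150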

5.
Since electronic switching systems usually have very strict reliability requirements as well as demanding performance objectives, the performance and reliability of switching systems need to be analysed jointly. In this paper, we compare conventional time–space–time (T–S–T) switches that use a single space switch with those that use multiple separated space switches, from the viewpoints of reliability and performance. We consider T–S–T switching networks that consist of N incoming time switches together with either one N×N space switch, two (N/2)×(N/2) space switches, or four (N/4)×(N/4) space switches. We introduce a Markov reliability model to study the effect of failures, and we analyse the reliability and performance of the three types of switching networks in terms of average blocking probability and mean time to unreliable operation as the offered traffic is varied. The results show that T–S–T switching networks with multiple separated space switches exhibit better performance and reliability than those with a single space switch.
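
For background, blocking in multistage switches is often approximated with Lee's graph method; the sketch below applies the textbook three-stage formula (all k internal paths busy, each path blocked with probability 1 − (1 − p_link)²). This only illustrates how blocking varies with the number of internal paths; the paper's analysis uses a Markov reliability model and specific T–S–T configurations that are not reproduced here.

    def lee_blocking(p_in: float, n: int, k: int) -> float:
        """Lee's approximation for a three-stage switch.
        p_in: occupancy per inlet, n: inlets per first-stage switch,
        k: number of middle-stage paths."""
        p_link = p_in * n / k                    # internal link occupancy
        path_blocked = 1.0 - (1.0 - p_link) ** 2
        return path_blocked ** k                 # all k alternative paths busy

    for k in (8, 12, 16):                        # illustrative path counts
        print(k, lee_blocking(p_in=0.7, n=8, k=k))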

6.
The paper examines the growth models currently used in the literature for modelling the growth of publications. It briefly introduces three growth models and explores their applicability to the growth of world and Indian physics literature. The analysis suggests that the growth of the Indian physics literature follows a logistic model, while the growth of the world physics literature is explained by a combination of logistic and power models. Criteria for the selection of growth models, based on the new growth rate functions suggested by Egghe and Ravichandra Rao, are given. The methodology suggested by Egghe and Ravichandra Rao is shown to work satisfactorily, except for longer time series of growth data, where we may have to resort to a data-splitting approach if the plots of the new growth rate functions suggest it. This approach helped us to use a combination of two growth models, instead of one, to explain the growth of the world physics literature.
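
The logistic model referred to above has the familiar closed form C(t) = K / (1 + exp(−r(t − t0))); a minimal sketch of fitting it to cumulative publication counts (synthetic data, not the paper's) follows.

    import numpy as np
    from scipy.optimize import curve_fit

    def logistic(t, K, r, t0):
        """Cumulative output C(t) = K / (1 + exp(-r*(t - t0)))."""
        return K / (1.0 + np.exp(-r * (t - t0)))

    years = np.arange(1960, 2001)
    rng = np.random.default_rng(0)
    counts = logistic(years, 50000.0, 0.15, 1985.0) * rng.normal(1.0, 0.02, years.size)

    popt, _ = curve_fit(logistic, years, counts, p0=(counts.max(), 0.1, years.mean()))
    print("fitted K, r, t0:", popt)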

7.
This paper first establishes the existence and uniqueness of the intersection time between two neighbouring shocks, or between a shock and a characteristic, for the analytical shock-fitting algorithm that was proposed to solve the Lighthill–Whitham–Richards (LWR) traffic flow model with a linear speed–density relationship; the results rely on the monotonicity properties of density variations along a shock and greatly improve the robustness of the analytical shock-fitting algorithm. We then discuss the efficient evaluation of the measure of effectiveness (MOE) of the algorithm, developing explicit expressions for the MOE, namely the total travel time incurred by travellers within the space–time region encompassed by the shocks and/or characteristic lines. A numerical example illustrates the effectiveness and efficiency of the proposed method in comparison with numerical solutions obtained by a fifth-order weighted essentially non-oscillatory (WENO) scheme. Copyright © 2007 John Wiley & Sons, Ltd.
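
To make the objects concrete, the sketch below uses the standard Greenshields (linear speed–density) relation, under which shock speeds follow the Rankine–Hugoniot condition s = (q2 − q1)/(k2 − k1), and computes the intersection time of two neighbouring shock trajectories; the densities, free-flow speed and jam density are illustrative, and the paper's actual shock-fitting algorithm and MOE expressions are not reproduced.

    def flow(k, vf=100.0, kj=150.0):
        """Greenshields flow q(k) = k * vf * (1 - k/kj)."""
        return k * vf * (1.0 - k / kj)

    def shock_speed(k1, k2, vf=100.0, kj=150.0):
        """Rankine-Hugoniot speed of the jump between densities k1 and k2."""
        return (flow(k2, vf, kj) - flow(k1, vf, kj)) / (k2 - k1)

    s1 = shock_speed(30.0, 90.0)    # upstream shock (toy densities, veh/km)
    s2 = shock_speed(90.0, 140.0)   # downstream shock starting 2 km ahead
    t_star = 2.0 / (s1 - s2)        # x = s1*t meets x = 2 + s2*t (s1 > s2 here)
    print(f"s1 = {s1:.2f} km/h, s2 = {s2:.2f} km/h, intersection at t = {t_star:.3f} h")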

8.
In this paper, we consider linear and non-linear space–time fractional reaction–diffusion equations (STFRDE) on a finite domain. The equations are obtained from standard reaction–diffusion equations by replacing the second-order space derivative with a fractional derivative of order β ∈ (1, 2], and the first-order time derivative with a fractional derivative of order α ∈ (0, 1]. We use the Adomian decomposition method to construct explicit solutions of the linear and non-linear STFRDE. Finally, some examples are given. Copyright © 2007 John Wiley & Sons, Ltd.

9.
We propose a novel face tracking framework, the three-stage model, for robust face tracking against interruptions from face-like blobs. In constructing the proposed model for robust real-time face tracking, we considered two critical factors. One is the exclusion of background information in the initialization of the target model, the extraction of the target candidate region, and the updating of the target model. The other is the robust estimation of face movement under various environmental conditions. The proposed three-stage model consists of a preattentive stage, an assignment stage, and a postattentive stage with a prerecognition phase. The model is constructed through the effective integration of optimum cues, selected in consideration of the trade-off between true positives and false positives of face classification based on a context-dependent type of categorization. The experimental results demonstrate that the proposed tracking method improves the performance of the real-time face tracking process in terms of success rates, with robustness against interruptions from face-like blobs. © 2008 Wiley Periodicals, Inc. Int J Imaging Syst Technol, 17, 321–327, 2007

10.
The extended finite element method (XFEM) is often used in applications that involve moving interfaces, such as the propagation of cracks or the movement of interfaces in two-phase problems. This work focuses on time integration in the XFEM. The performance of the discontinuous Galerkin method in time (space–time finite elements (FEs)) and of time-stepping schemes is analysed through convergence studies for different model problems. It is shown that space–time FEs achieve optimal convergence rates. Special care is required for time stepping in the XFEM owing to the time dependence of the enrichment functions: in each time step, the enrichment functions have to be evaluated at different time levels, which has important consequences for the quadrature used in the integration of the weak form. A time-stepping scheme that leads to optimal or only slightly sub-optimal convergence rates is systematically constructed in this work. Copyright © 2009 John Wiley & Sons, Ltd.

11.
This paper presents a computational strategy for the simulation of multiphysics problems based on the LArge Time INcrement (LATIN) method. One of the main issues in the design of advanced tools for the simulation of such problems is accounting for the different time and space scales that usually arise in the different physics. Here, we focus on using a different time discretization for each physics by introducing an interface with its own discretization. The proposed application concerns the simulation of a two-physics problem: fluid–structure interaction in porous media. Copyright © 2007 John Wiley & Sons, Ltd.

12.
In recent decades, the recognition that uncertainty lies at the heart of modern project management has prompted considerable research on robust project scheduling for dealing with uncertainty in the scheduling environment. The literature generally offers two main strategies for developing a robust predictive project schedule, namely robust resource allocation and time buffering, yet previous studies seem to have neglected the potential benefits of integrating the two. Moreover, few efforts have been made to protect the project due date and the activity start times against disruptions simultaneously, although this is urgently needed in practice. In this paper, we aim to construct a proactive schedule that is not only short in duration but also less vulnerable to disruptions. First, a bi-objective optimisation model with a proper normalisation of the two components is proposed in the presence of activity duration variability. Then a two-stage heuristic algorithm is developed that solves a robust resource allocation problem in the first stage and determines the position and size of the time buffers using a simulated annealing algorithm in the second stage. Finally, an extensive computational experiment on the PSPLIB network instances demonstrates the superiority of combining resource allocation with time buffering, as well as the effectiveness of the proposed two-stage algorithm for generating proactive project schedules with composite robustness.
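
A generic simulated annealing skeleton of the kind that could drive the second (time-buffering) stage is sketched below; the neighbourhood move, the surrogate objective and every parameter are placeholder assumptions, not the authors' design.

    import math
    import random

    def anneal(x0, cost, neighbour, T0=1.0, cooling=0.95, iters=2000):
        """Minimise cost(x) over buffer vectors by simulated annealing."""
        x, fx = x0, cost(x0)
        best, fbest, T = x, fx, T0
        for _ in range(iters):
            y = neighbour(x)
            fy = cost(y)
            if fy < fx or random.random() < math.exp(-(fy - fx) / T):
                x, fx = y, fy           # accept improving or lucky worsening move
                if fx < fbest:
                    best, fbest = x, fx
            T *= cooling
        return best, fbest

    # toy trade-off: schedule length (sum of buffers) vs instability (small buffers)
    cost = lambda b: sum(b) + sum(3.0 / (1 + bi) for bi in b)
    neighbour = lambda b: [max(0, bi + random.choice((-1, 0, 1))) for bi in b]
    print(anneal([2, 2, 2, 2, 2], cost, neighbour))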

13.
The growth rate of scientific publication is studied from 1907 to 2007 using available data from a number of literature databases, including the Science Citation Index (SCI) and the Social Sciences Citation Index (SSCI). Traditional scientific publishing, that is, publication in peer-reviewed journals, is still increasing, although there are big differences between fields, and there are no indications that the growth rate has decreased in the last 50 years. At the same time, publication through new channels, for example conference proceedings, open archives and home pages, is growing fast. The growth rate for SCI up to 2007 is smaller than for comparable databases, which means that SCI has been covering a decreasing share of the traditional scientific literature. There are also clear indications that SCI coverage is especially low in some of the scientific areas with the highest growth rates, including computer science and the engineering sciences. The role of conference proceedings, open access archives and publications published on the web is increasing, especially in scientific fields with high growth rates, but this is only partially reflected in the databases. These new publication channels challenge the use of the big databases for measuring scientific productivity or output and the growth rate of science. Because of the declining coverage and this challenge, it is problematic that SCI has been, and still is, used as the dominant source for science indicators based on publication and citation counts. The limited data available for the social sciences show that the growth rate in SSCI was remarkably low and indicate that SSCI coverage was declining over time. National Science Indicators from Thomson Reuters is based solely on SCI, SSCI and the Arts and Humanities Citation Index (AHCI); the declining coverage of the citation databases therefore makes the use of this source problematic.

14.
A heterogeneous space–time full approximation storage (HFAS) multilevel formulation for molecular dynamics simulations is developed. The method consists of a waveform Newton smoothing that produces initial space–time iterates and a coarse model correction. The formulation is termed heterogeneous because it permits different interatomic potentials to be applied at different physical scales, which results in a flexible framework for physics coupling. Time integration is performed in windows using the implicit Newmark predictor–corrector method, which permits larger time integration steps than explicit methods; the step size is governed by accuracy rather than by the stability of the algorithm. We study three variants of the method: Picard iteration, constrained dynamics and force splitting. Numerical examples show that FAS based on force splitting provides significant time savings compared with standard explicit methods and alternative implicit space–time schemes. Parallel studies of the Picard iteration on harmonic problems illustrate the time-parallelization effect, which leads to parallel performance superior to that of explicit methods. Copyright © 2007 John Wiley & Sons, Ltd.
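
For reference, one implicit Newmark step (average acceleration, β = 1/4, γ = 1/2) for a single-degree-of-freedom system m·ü + c·u̇ + k·u = f is sketched below; this is only the textbook integrator named in the abstract, not the paper's heterogeneous space–time multilevel scheme.

    import math

    def newmark_step(u, v, a, dt, m, c, k, f_next, beta=0.25, gamma=0.5):
        """One implicit Newmark-beta step (standard effective-stiffness form)."""
        keff = k + gamma * c / (beta * dt) + m / (beta * dt**2)
        feff = (f_next
                + m * (u / (beta * dt**2) + v / (beta * dt) + (0.5 / beta - 1) * a)
                + c * (gamma * u / (beta * dt) + (gamma / beta - 1) * v
                       + dt * (gamma / (2 * beta) - 1) * a))
        u1 = feff / keff
        a1 = (u1 - u) / (beta * dt**2) - v / (beta * dt) - (0.5 / beta - 1) * a
        v1 = v + dt * ((1 - gamma) * a + gamma * a1)
        return u1, v1, a1

    # free vibration of a unit oscillator: u(0) = 1, exact solution u(t) = cos(t)
    u, v, a = 1.0, 0.0, -1.0
    for _ in range(100):
        u, v, a = newmark_step(u, v, a, dt=0.1, m=1.0, c=0.0, k=1.0, f_next=0.0)
    print(u, math.cos(10.0))  # numerical vs exact value at t = 10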

15.
The response time variability problem (RTVP) is an NP-hard combinatorial scheduling problem that has recently been formalised in the literature. The RTVP has a wide range of real-life applications; for example, in the automobile industry, models to be produced on a mixed-model assembly line have to be sequenced under just-in-time production. The RTVP occurs whenever products, clients or jobs need to be sequenced so as to minimise the variability in the time between the instants at which they receive the necessary resources. In two previous studies, three metaheuristic algorithms (a multi-start, a GRASP and a PSO algorithm) were proposed to solve the RTVP. We propose solving the RTVP by means of the electromagnetism-like mechanism (EM) metaheuristic algorithm. The EM algorithm is based on an analogy with the attraction–repulsion mechanism of electromagnetic theory, in which solutions are moved according to their associated charges. In this paper, we compare the proposed EM procedure with the three aforementioned metaheuristic algorithms and show that, on average, the EM procedure substantially improves on the results obtained.
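
A sketch of one EM-style attraction–repulsion move, loosely following Birbil and Fang's original scheme for continuous optimisation (charges derived from relative objective values, forces summed over the population); applying EM to the combinatorial RTVP requires encodings and repair steps not shown here, and all parameters are illustrative.

    import numpy as np

    def em_iteration(X, f, rng, step=0.1):
        """One attraction-repulsion move over a population X (rows = solutions)."""
        n_pts, dim = X.shape
        fx = np.array([f(x) for x in X])
        best = fx.argmin()
        denom = (fx - fx[best]).sum() or 1.0
        q = np.exp(-dim * (fx - fx[best]) / denom)   # better solutions: larger charge
        X_new = X.copy()
        for i in range(n_pts):
            if i == best:                            # keep the incumbent in place
                continue
            F = np.zeros(dim)
            for j in range(n_pts):
                if j == i:
                    continue
                d = X[j] - X[i]
                r2 = d @ d + 1e-12
                # attraction towards better points, repulsion from worse ones
                F += (d if fx[j] < fx[i] else -d) * q[i] * q[j] / r2
            X_new[i] = X[i] + step * rng.random() * F / (np.linalg.norm(F) + 1e-12)
        return X_new

    rng = np.random.default_rng(1)
    X = rng.uniform(-5.0, 5.0, size=(20, 2))
    sphere = lambda x: float(x @ x)                  # toy continuous test function
    for _ in range(200):
        X = em_iteration(X, sphere, rng)
    print("best objective found:", min(sphere(x) for x in X))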

16.
We propose a theoretical-experimental model of fatigue crack growth in hydrogen-containing media. The model is based on the regularities of the exhaustion of energy accumulated in a material under cyclic fracture and on the influence of hydrogen-containing media on the mechanical characteristics of the material. Using the deformation approach of fracture mechanics, we deduce analytic dependences for determining the conditions of elastoplastic deformation of the material near the crack tip. The effects of crack closure and loading ratio are taken into account. The interactions of the various phenomena caused by hydrogen, and their combined influence on changes in the fatigue crack growth rate, are evaluated. We also compare the computed fatigue crack growth rates with experimental values for two types of steel under different conditions.
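
For orientation, the sketch below integrates the classical Paris law da/dN = C(ΔK)^m with ΔK = Δσ·√(πa)·Y, using a simple multiplier to mimic the acceleration of crack growth in a hydrogen-containing medium; the constants and the multiplier are illustrative assumptions, not the deformation-approach model of the paper.

    import numpy as np

    def cycles_to_grow(a0, af, dsigma, C, m, Y=1.0, h_factor=1.0, n=100000):
        """Integrate Paris' law da/dN = h_factor * C * dK**m numerically."""
        a = np.linspace(a0, af, n)
        dK = dsigma * np.sqrt(np.pi * a) * Y         # stress intensity range
        dNda = 1.0 / (h_factor * C * dK**m)
        da = a[1] - a[0]
        return float(np.sum((dNda[:-1] + dNda[1:]) * 0.5 * da))  # trapezoid rule

    # toy steel-like values: a in m, dsigma in MPa, C in (m/cycle)/(MPa*m^0.5)^m
    N_air = cycles_to_grow(1e-3, 1e-2, dsigma=100.0, C=1e-11, m=3.0)
    N_h2 = cycles_to_grow(1e-3, 1e-2, dsigma=100.0, C=1e-11, m=3.0, h_factor=5.0)
    print(f"cycles, 1 -> 10 mm: air ~ {N_air:.3e}, hydrogen ~ {N_h2:.3e}")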

17.
The Japanese experience of Just-in-Time (JIT) production has shown that there are advantages and benefits associated with efforts to reduce inventory lead time and the associated inventory cost. The length of the lead time directly affects the customer service level, the inventory investment in safety stock, and the competitive ability of a business. In most of the literature dealing with inventory problems, whether the model is deterministic or probabilistic, lead time is viewed as a prescribed constant or a stochastic variable and is not subject to control. In many practical situations, however, lead time can be reduced at an additional cost. Moreover, the successful implementation of JIT production in today's supply chain environment requires a new spirit of cooperation between the buyer and the vendor (Goyal and Srinivasan 1992). A desirable condition in long-term purchase agreements in such a manufacturing environment is the frequent delivery of small quantities of items, so as to minimize the inventory holding cost for the buyer, while the vendor also needs to minimize his or her total inventory costs. An integrated inventory model that allows the two trading parties to form a strategic alliance for profit sharing may prove helpful in breaking down the traditional barriers. This paper presents an integrated inventory model with controllable lead time. The model is shown to provide a lower total cost and a shorter lead time than the models of Banerjee (1986) and Goyal (1988), and it is useful for practical inventory problems.
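
A rough sketch of the trade-off such models capture: a continuous-review cost with safety stock proportional to √L and a linear lead-time crashing cost. The functional form and every number below are illustrative assumptions, not the paper's integrated buyer-vendor model.

    import numpy as np
    from scipy.optimize import minimize_scalar

    D, Q = 600.0, 150.0    # annual demand, order quantity (toy values)
    A, h = 200.0, 20.0     # ordering cost per order, holding cost per unit-year
    sigma, k = 7.0, 1.645  # weekly demand std dev, safety factor (~95% service)
    c_crash = 25.0         # assumed cost per week of lead time crashed, per order
    L_max = 8.0            # uncrashed lead time, weeks

    def total_cost(L):
        ordering = A * D / Q                               # annual ordering cost
        holding = h * (Q / 2.0 + k * sigma * np.sqrt(L))   # cycle + safety stock
        crashing = (D / Q) * c_crash * (L_max - L)         # cost of shortening L
        return ordering + holding + crashing

    res = minimize_scalar(total_cost, bounds=(1.0, L_max), method="bounded")
    print(f"optimal lead time ~ {res.x:.2f} weeks, total annual cost ~ {res.fun:.1f}")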

18.
Previous models for grain growth are usually based on Beck's formula, which is inadequate for quantitative prediction of austenite grain growth during the reheating of as-cast microstructures in microalloyed steels. The applicability of these empirical grain growth models is limited to particular categories of steels, such as Nb, Nb–Ti and Ti–V microalloyed steels. In this study, a metallurgically based model has been developed to predict austenite grain growth kinetics in microalloyed steels. The model accounts for the pinning force of second-phase particles on grain boundary migration, in which the evolution of the mean particle size with time and temperature is calculated on the basis of the Lifshitz–Slyozov–Wagner (LSW) particle coarsening theory, and the volume fraction of precipitates is obtained from a thermodynamic model. The reliability of the model is validated by the agreement between its predictions and experimental measurements reported in the literature.
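
The two ingredients named above have compact textbook forms: LSW coarsening of the mean particle radius, r³(t) = r0³ + Kt, and a Zener-type limiting grain size, D_lim = 4r/(3f). The sketch below combines them with illustrative values for the rate constant K and precipitate volume fraction f (in the paper these come from the LSW theory and a thermodynamic model, respectively).

    def lsw_radius(t, r0=5e-9, K=1e-28):
        """LSW coarsening: mean particle radius r(t) = (r0**3 + K*t)**(1/3), in m."""
        return (r0**3 + K * t) ** (1.0 / 3.0)

    def zener_limit(r, f=1e-3):
        """Zener-type limiting grain size from particle pinning, D = 4r/(3f)."""
        return 4.0 * r / (3.0 * f)

    for t in (0.0, 60.0, 600.0, 3600.0):   # reheating time, s
        r = lsw_radius(t)
        print(f"t = {t:6.0f} s  r = {r * 1e9:5.2f} nm  D_lim = {zener_limit(r) * 1e6:6.2f} um")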

19.
Many population genetic models have been developed for the purpose of inferring population size and growth rates from random samples of genetic data. We examine two popular approaches to this problem, the coalescent and the birth–death-sampling model (BDM), in the context of estimating population size and birth rates in a population growing exponentially according to the birth–death branching process. For sequences sampled at a single time, we found that the coalescent and the BDM gave virtually indistinguishable results in terms of the growth rates and the fraction of the population sampled, even when sampling from a small population. For sequences sampled at multiple time points, we find that the birth–death model estimators are subject to large bias if the sampling process is misspecified. Since BDMs incorporate a model of the sampling process, we show how much of the statistical power of BDMs arises from the sequence of sample times rather than from the genealogical tree itself. This motivates the development of a new coalescent estimator, which is augmented with a model of the known sampling process and is potentially more precise than a coalescent that does not use sample time information.

20.
The delay time concept is widely adopted in the literature to model the two-stage failure process of most industrial systems, which can be divided into a normal stage (from new to the initial point of a defect) and a defective stage (from the defect arrival point to failure). Most existing delay time models assume that the normal and defective stages are independent. A generalized delay time model is proposed in this paper by considering the dependence between the normal and defective stages, reflected in the fact that they share the same external shock process. Following the definition of a shot-noise process, external shocks incur random hazard rate increments in the two stages. The failure state is self-announcing, whereas the defective state can only be detected by block-based inspection or by opportunistic inspection offered by unexpected shutdowns due to unavoidable external factors. The system is correctively replaced upon the occurrence of a system failure or preventively replaced upon detection of a defective state. Based on the stochastic failure model and the maintenance policy, this paper evaluates system reliability performance and the average long-run cost rate via a Markov-chain-based approach. Finally, a case study on a steel converter plant is given to demonstrate the applicability of the proposed model.
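
A minimal Monte Carlo of a basic delay-time inspection policy is sketched below to show how a long-run cost rate is typically estimated; the stages are exponential and independent here for simplicity, whereas in the paper they are dependent through a shared shot-noise shock process, and all costs and rates are illustrative.

    import random

    def cost_rate(T_insp=5.0, c_i=1.0, c_p=10.0, c_f=100.0,
                  mean_u=20.0, mean_h=4.0, n_cycles=50000):
        """Long-run cost per unit time: inspect every T_insp, replace
        preventively if the defect is found, correctively on failure."""
        cost = time = 0.0
        for _ in range(n_cycles):
            u = random.expovariate(1.0 / mean_u)       # normal stage (defect arrival)
            h = random.expovariate(1.0 / mean_h)       # delay time (defect -> failure)
            t_detect = (int(u / T_insp) + 1) * T_insp  # first inspection after u
            if u + h < t_detect:                       # failure before detection
                cost += int((u + h) / T_insp) * c_i + c_f
                time += u + h
            else:                                      # defect detected, renew
                cost += (t_detect / T_insp) * c_i + c_p
                time += t_detect
        return cost / time

    for T in (2.0, 5.0, 10.0):
        print(f"inspect every {T:4.1f}: cost rate ~ {cost_rate(T_insp=T):.3f}")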
