1.
The model of Kaushal and Tomita (2002a), which has already been found satisfactory for a broadly graded, multisized particulate zinc tailings slurry at moderate concentrations up to 26%, flow velocities up to 3.5 m/s in a 105 mm diameter pipe, a mean particle diameter of 140 μm, and a geometric standard deviation of 4.0, is tested against the concentration distribution data collected by Kaushal et al. (2005) on two sizes of glass beads, whose mean diameters and geometric standard deviations are 440 μm and 1.2, and 125 μm and 1.15, respectively, at concentrations up to 50% and flow velocities up to 5 m/s in a 54.9 mm diameter pipe. The Kaushal and Tomita (2002a) model gives more asymmetric concentration distributions. A modified model is proposed by relaxing some of the restrictive assumptions used in the existing model. Comparison of the proposed model with the experimental data of Kaushal et al. (2005), Gillies and Shook (1994), and Matousek (2009) is satisfactory.
2.
3.
Two- and three-dimensional simulations (performed with the volume-of-fluid method in FLUENT) of the rise and interaction of two or more thermocapillary bubbles, arranged horizontally and perpendicular to a hot surface, are investigated and presented in this paper. The results indicate that thermocapillary bubble agglomeration can occur in zero gravity. Furthermore, the temperature gradient and bubble diameter were found to have a major impact on collisions between bubbles. Nas and Tryggvason (1993), in their three-dimensional numerical study, reported that no such collisions could occur in zero gravity and that bubbles repel each other because of the cold liquid carried between them during migration. Their results contrast with both the present results and those recorded onboard the Chinese 22nd recoverable satellite by Kang et al. (2008), who observed a total of 19 coalescences between air bubbles injected in the direction of the temperature gradient of a stagnant heated liquid.
4.
K. M. Assefa 《Particulate Science and Technology》2017,35(1):77-85
Bench-scale tests were carried out to investigate the rheological properties of multi-sized particulate Bingham slurries at high solid concentrations ranging from 50% to 70% by weight. In addition, rheological data from Biswas et al. (2000) and Chandel et al. (2009, 2010) have also been considered. Based on this extensive body of rheological data, an empirical model is proposed for viscosity as a function of solid volume fraction (φ), maximum solid volume fraction (φm), median particle diameter (d50), and coefficient of uniformity (Cu), using optimization and a nonlinear least-squares curve-fitting technique. The proposed model shows good agreement with the experimental data considered in the present study and is found to be much better than previously developed models in predicting the viscosity of multi-sized particulate Bingham slurries at high solid concentrations.
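As an illustration of the kind of least-squares fitting described in this abstract, the sketch below fits the exponent of a Krieger-Dougherty-style relative-viscosity law to synthetic data via a linearised least-squares step. The functional form, the assumed maximum packing fraction, and the data are all hypothetical; this is not the model proposed in the paper, which also involves d50 and Cu.

```python
import numpy as np

# Hypothetical Krieger-Dougherty-style relative-viscosity model:
#   mu_r = (1 - phi/phi_m) ** (-n)
# Only the exponent n is fitted here, for a fixed assumed phi_m.
PHI_M = 0.68  # assumed maximum solid volume fraction (illustrative)

def relative_viscosity(phi, n):
    return (1.0 - phi / PHI_M) ** (-n)

def fit_exponent(phi, mu_r):
    """Least-squares fit of n via the linearised form
    log(mu_r) = -n * log(1 - phi/phi_m)."""
    x = -np.log(1.0 - phi / PHI_M)
    y = np.log(mu_r)
    return float(np.sum(x * y) / np.sum(x * x))

# Synthetic noise-free data generated with n = 2.0, so the fit recovers it.
phi = np.array([0.10, 0.20, 0.30, 0.40, 0.50])
mu_r = relative_viscosity(phi, 2.0)
n_hat = fit_exponent(phi, mu_r)
print(round(n_hat, 3))  # → 2.0
```

With noisy measured viscosities the same linearised step gives a quick starting value for a full nonlinear fit.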
5.
A. Tenorio C. M. Pereyra E. J. Martínez De La Ossa 《Particulate Science and Technology》2013,31(3):262-266
Pharmaceutical preparations are the final product of a technological process that gives the drugs the characteristics appropriate for easy administration, proper dosage, and enhancement of the therapeutic efficacy. The design of pharmaceutical preparations in nanoparticulate form has emerged as a new strategy for drug delivery (Pasquali, Bettini, and Giordano, 2006). Particle size (PS) and particle size distribution (PSD) are critical parameters that determine the rate of dissolution of the drug in biological fluids and, hence, have a significant effect on the bioavailability of those drugs that have poor solubility in water, for which dissolution is the rate-limiting step in the absorption process (Perrut, Jung, and Leboeuf, 2005; Van Nijlen et al., 2003). Supercritical antisolvent (SAS) processes have been widely used to precipitate active pharmaceutical ingredients (APIs) (Chattopadhyay and Gupta, 2001; Rehman et al., 2001) with a high level of purity, suitable dimensional characteristics, narrow PSD, and spherical morphologies. The SAS process is based on the particular properties of supercritical fluids (SCFs). These fluids have diffusivities two orders of magnitude larger than those of liquids, resulting in faster mass transfer rates. SCF properties (solvent power and selectivity) can also be adjusted continuously by altering the experimental conditions (temperature and pressure). As a consequence, SCFs can be removed from the process by a simple change from supercritical to ambient conditions, which avoids difficult post-treatment of waste liquid streams. Carbon dioxide (CO2) at supercritical conditions, among all possible SCFs, is largely used because of its relatively low critical temperature (31.1°C) and pressure (73.8 bar), low toxicity, and low cost.
In this article, we present results for two processed antibiotics (ampicillin and amoxicillin), among the world's most widely prescribed, dissolved in 1-methyl-2-pyrrolidone (NMP) with carbon dioxide used as the antisolvent.
6.
Two constitutive models representative of two well-known modeling techniques for superelastic shape-memory wires are reviewed. The first model was proposed by Kim and Abeyaratne in the framework of finite thermo-elasticity with non-convex energy [1]. In the present article this model has been modified in order to take into account the difference between the elastic moduli of austenite and martensite and to introduce the isothermal approximation proposed in [1]. The second model was developed by Auricchio et al. within the theory of irreversible thermodynamics with internal variables [2]. Both models are temperature and strain-rate dependent, and both take thermal effects into account. The focus of this article is on investigating how the two models compare with experimental data obtained from testing superelastic NiTi wires used in the design of a prototype anti-seismic device [3, 4]. After model calibration and numerical implementation, numerical simulations based on the two models are compared with data obtained from uniaxial tensile tests performed at two different temperatures and various strain rates.
7.
Mark Freel 《Industry and innovation》2006,13(3):335-358
Employing data from a sample of 1,161 small firms, the paper draws broad comparisons between patterns of innovation expenditure and output, innovation networking, knowledge intensity and competition within Knowledge‐Intensive Business Services (KIBS; N = 563) and manufacturing firms (N = 598). In so doing, KIBS are further disaggregated along lines proposed by Miles et al. (1995). That is, as technology‐based KIBS (t‐KIBS; N = 264) and professional KIBS (p‐KIBS; N = 299). However, detailing such broad patterns is preliminary. The principal interest of the paper is in identifying the factors associated with higher levels of innovativeness, within each sector, and the extent to which such “success” factors vary across sectors. The results of the analysis appear to offer support for some widely held beliefs about the relative roles of “softer” and “harder” sources of knowledge and technology within services and manufacturing (Tether, 2004). However, some important qualifications are also apparent.
8.
In this contribution, we present a novel polygonal finite element method applied to hyperelastic analysis. For generating polygonal meshes in a bounded period of time, we use the adaptive Delaunay tessellation (ADT) proposed by Constantiniu et al. [1]. ADT is an unstructured hybrid tessellation of a scattered point set that minimally covers the proximal space around each point. In this work, we have extended the ADT to nonconvex domains using concepts from constrained Delaunay triangulation (CDT). The proposed method is thus based on a constrained adaptive Delaunay tessellation (CADT) for the discretization of domains into polygonal regions. We employ the metric coordinate (Malsch) method for obtaining the interpolation over convex and nonconvex domains. For the numerical integration of the Galerkin weak form, we resort to classical Gaussian quadrature based on triangles. Numerical examples of two-dimensional hyperelasticity are considered to demonstrate the advantages of the polygonal finite element method.
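To illustrate the triangle-based quadrature idea mentioned in this abstract, here is a minimal sketch that integrates a function over a convex polygon by fan-triangulating about the centroid and applying a one-point Gauss rule per triangle (exact for linear integrands). This is a generic illustration, not the CADT implementation of the paper.

```python
def tri_area(a, b, c):
    """Unsigned area of a triangle from its vertex coordinates."""
    return 0.5 * abs((b[0] - a[0]) * (c[1] - a[1]) - (c[0] - a[0]) * (b[1] - a[1]))

def integrate_over_polygon(pts, f):
    """One-point (centroid) Gauss rule on a fan triangulation of a convex
    polygon about its vertex-average centroid; exact for linear integrands."""
    cx = sum(p[0] for p in pts) / len(pts)
    cy = sum(p[1] for p in pts) / len(pts)
    total = 0.0
    for i in range(len(pts)):
        a, b = pts[i], pts[(i + 1) % len(pts)]
        area = tri_area(a, b, (cx, cy))
        gx = (a[0] + b[0] + cx) / 3.0   # Gauss point = triangle centroid
        gy = (a[1] + b[1] + cy) / 3.0
        total += area * f(gx, gy)
    return total

square = [(0, 0), (1, 0), (1, 1), (0, 1)]
print(integrate_over_polygon(square, lambda x, y: 1.0))    # ≈ 1.0 (unit-square area)
print(integrate_over_polygon(square, lambda x, y: x + y))  # ≈ 1.0, exact for linear f
```

Higher-order integrands would need a multi-point triangle rule, but the fan-triangulation structure stays the same.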
9.
M. Z. ANABTAWI N. HILAL A. E. MUFTAH M. C. LEAPER 《Particulate Science and Technology》2013,31(4):391-403
Following on from the work of Anabtawi et al. (2003), this study examined how the volumetric liquid-phase mass transfer coefficient, kLa, of oxygen in air in three-phase spout-fluid beds was affected by varying the system parameters of bed height, bed diameter, gas velocity, and liquid velocity. The liquid used was 0.1% CMC solution, displaying a pseudo-plastic rheology, with 1.75 mm glass spheres as packing. The values of the Sherwood number were lower than in previous studies (Anabtawi et al., 2003), in the range 9,000–186,000. Gas velocity had a similar effect on kLa as in a bubble column, with results also giving good agreement with previous work on two-phase and three-phase spouted bed systems. The correlation obtained for the effect of liquid velocity on kLa compared well with that of Schumpe et al. (1989). An increase in the height of packing increased kLa to the power of 0.319, with an increase in column diameter also causing an increase in kLa, which is in agreement with the results of Akita and Yoshida (1973).
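The power-law exponents quoted in this abstract (e.g. kLa increasing with packing height to the power 0.319) are typically extracted by log-log linear regression. The sketch below recovers such an exponent from synthetic data; the heights and the prefactor are hypothetical, not the study's measurements.

```python
import numpy as np

# Extract the exponent b of a power law y = a * x**b by ordinary
# least squares on log(y) vs log(x).
def powerlaw_exponent(x, y):
    lx, ly = np.log(x), np.log(y)
    lx -= lx.mean()
    ly -= ly.mean()
    return float(np.sum(lx * ly) / np.sum(lx * lx))

H = np.array([0.2, 0.3, 0.4, 0.5, 0.6])   # packing heights, m (hypothetical)
kLa = 0.05 * H ** 0.319                   # synthetic kLa following the reported exponent
print(round(powerlaw_exponent(H, kLa), 3))  # → 0.319
```

With real, noisy data the same regression also yields the prefactor from the intercept of the log-log fit.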
10.
Kamlesh Jangid 《International Journal for Computational Methods in Engineering Science and Mechanics》2018,19(2):129-137
An extension of the PS model (Gao et al. [1]) for piezoelectric materials and the SEMPS model (Fan and Zhao [2]) for magneto-electro-elastic (MEE) materials is proposed for two semi-permeable cracks in an MEE medium. It is assumed that magnetic yielding occurs at the continuations of the cracks due to the prescribed loads. We model these crack continuations as zones with a cohesive saturation-limit magnetic induction. Stroh's formalism and complex variable techniques are used to formulate the problem. Closed-form analytical expressions are derived for various fracture parameters. A numerical case study is presented for a cracked BaTiO3–CoFe2O4 ceramic plate.
11.
Yingxin Guo 《Dynamical Systems: An International Journal》2017,32(4):490-503
In this paper, the exponential stability of travelling wave solutions for nonlinear cellular neural networks with distributed delays on the lattice is studied. The weighted energy method and a comparison principle are employed to derive sufficient conditions under which the proposed networks are exponentially stable. Following the study [13] on the existence of travelling wave solutions in nonlinear delayed cellular neural networks, this paper focuses on the exponential stability of these travelling wave solutions.
12.
The problem of designing a water quality monitoring network for river systems is to find the optimal location of a finite number of monitoring devices that minimizes the expected detection time of a contaminant spill event while guaranteeing good detection reliability. When uncertainties in spill and rain events are considered, both the expected detection time and detection reliability need to be estimated by stochastic simulation. This problem is formulated as a stochastic discrete optimization via simulation (OvS) problem on the expected detection time with a stochastic constraint on detection reliability; and it is solved with an OvS algorithm combined with a recently proposed method called penalty function with memory (PFM). The performance of the algorithm is tested on the Altamaha River and compared with that of a genetic algorithm due to Telci, Nam, Guan and Aral (2009).
13.
Expectile, first introduced by Newey and Powell in 1987 in the econometrics literature, has recently become increasingly popular in risk management and capital allocation for financial institutions due to its desirable properties such as coherence and elicitability. The current standard tool for expectile regression analysis is the multiple linear expectile regression proposed by Newey and Powell in 1987. The growing applications of expectile regression motivate us to develop a much more flexible nonparametric multiple expectile regression in a reproducing kernel Hilbert space. The resulting estimator is called KERE, which has multiple advantages over the classical multiple linear expectile regression by incorporating nonlinearity, nonadditivity, and complex interactions in the final estimator. The kernel learning theory of KERE is established. We develop an efficient algorithm inspired by the majorization-minimization principle for solving the entire solution path of KERE. It is shown that the algorithm converges at least at a linear rate. Extensive simulations are conducted to show the very competitive finite sample performance of KERE. We further demonstrate the application of KERE by using personal computer price data. Supplementary materials for this article are available online.
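As background for the abstract above: the tau-expectile of a sample minimises an asymmetrically weighted squared loss, which is what makes it elicitable. The sketch below computes a sample expectile by the standard fixed-point iteration on the first-order condition; it is a generic illustration, not the KERE estimator or its majorization-minimization path algorithm.

```python
import numpy as np

# The tau-expectile m of a sample minimises
#   L(m) = sum_i w_i * (y_i - m)**2,  with w_i = tau if y_i > m else 1 - tau.
# Setting dL/dm = 0 gives the fixed point m = sum(w*y) / sum(w).
def expectile(y, tau, iters=200):
    m = float(np.mean(y))
    for _ in range(iters):
        w = np.where(y > m, tau, 1.0 - tau)
        m = float(np.sum(w * y) / np.sum(w))
    return m

y = np.array([1.0, 2.0, 3.0, 4.0, 10.0])
print(round(expectile(y, 0.5), 6))  # → 4.0, the mean: tau = 0.5 recovers it
print(expectile(y, 0.9) > 4.0)      # → True: upper expectiles exceed the mean
```

Replacing the scalar m with a function in a reproducing kernel Hilbert space, penalised by its norm, is the conceptual step from this sample statistic to kernel expectile regression.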
14.
Partial least squares (PLS) is a widely used method for prediction in applied statistics, especially in chemometrics applications. However, PLS is not invariant or equivariant under scale transformations of the predictors, which tends to limit its scope to regressions in which the predictors are measured in the same or similar units. Cook, Helland, and Su (2013) built a connection between nascent envelope methodology and PLS, allowing PLS to be addressed in a traditional likelihood-based framework. In this article, we use the connection between PLS and envelopes to develop a new method—scaled predictor envelopes (SPE)—that incorporates predictor scaling into PLS-type applications. By estimating the appropriate scales, the SPE estimators can offer efficiency gains beyond those given by PLS, and further reduce prediction errors. Simulations and an example are given to support the theoretical claims.
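The scale non-invariance of PLS noted in this abstract is easy to demonstrate: for centered data the first PLS weight vector is proportional to X^T y, so rescaling one predictor column changes the fitted direction. The sketch below shows this with synthetic data; it illustrates plain one-component PLS, not the SPE method of the article.

```python
import numpy as np

# First PLS (NIPALS) weight vector for centered data: w ∝ X^T y, normalised.
def pls1_weight(X, y):
    Xc = X - X.mean(axis=0)
    yc = y - y.mean()
    w = Xc.T @ yc
    return w / np.linalg.norm(w)

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 3))
y = X @ np.array([1.0, 0.5, 0.0]) + 0.1 * rng.normal(size=50)

w1 = pls1_weight(X, y)
X_scaled = X.copy()
X_scaled[:, 0] *= 10.0              # change the units of the first predictor
w2 = pls1_weight(X_scaled, y)
print(np.allclose(w1, w2))          # → False: the PLS direction changed
```

An ordinary least-squares fit, by contrast, is equivariant under such column rescaling, which is the gap that predictor-scaling methods aim to close.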
15.
《Virtual and Physical Prototyping》2013,8(2):101-116
Many rapid prototyping systems which produce prototypes by layer-by-layer material deposition are now commercially available. The layer-by-layer deposition process leads to a stepped surface known as a staircase. Staircase formation is a geometric constraint of layered manufacturing which cannot be eliminated. The staircase on the surface of a prototype detracts from the surface finish and hence restricts the functionality of prototypes. It is realized that there is a need to modify RP (rapid prototyping) systems so that prototypes with better surface finish can be produced without incurring high production costs. A virtual hybrid fused deposition modelling system (hybrid-FDM) is proposed in the present work that uses both layer-by-layer deposition and machining. In this system, the CAD model is sliced adaptively using a limiting centre line average (Ra) value as the criterion (Pandey et al. 2003a). Hot cutter machining/ploughing (HCM) (Pandey et al. 2003b) is recommended to machine the build edges (staircase) of ABS material. A numerically controlled x–y traversing mechanism is proposed as an attachment to move hot cutters along the periphery of slices to machine build edges. In this paper, geometrical designs of cutters are proposed. A process planning system to decide the number of layers to be deposited and then machined in order to access intricate features of a part is implemented. The developed system simulates surface roughness before and after hot cutter machining. An experimental study is carried out by machining the build edges of an axisymmetric FDM part on a lathe to form a basis for a hybrid-FDM system.
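The adaptive slicing mentioned in this abstract uses a limiting Ra value; a closely related classical criterion, sketched below, bounds the staircase cusp height c = t·|cos θ|, where t is the layer thickness and θ the angle between the build direction and the local surface normal. The thickness bounds and cusp limit are hypothetical, and this is the generic cusp-height rule, not the Ra formulation of Pandey et al.

```python
import math

# Choose the largest layer thickness t in [t_min, t_max] such that the
# staircase cusp height t * |cos(theta)| stays below cusp_limit.
def adaptive_thickness(theta_deg, cusp_limit, t_min=0.1, t_max=0.3):
    c = abs(math.cos(math.radians(theta_deg)))
    if c < 1e-12:            # vertical wall: no staircase constraint
        return t_max
    return max(t_min, min(t_max, cusp_limit / c))

# A near-horizontal surface region (normal nearly parallel to the build
# direction) forces thinner layers than a near-vertical one.
print(round(adaptive_thickness(10.0, cusp_limit=0.15), 3))  # → 0.152
print(adaptive_thickness(90.0, cusp_limit=0.15))            # → 0.3 (t_max)
```

A slicer applies such a rule per height band, taking the most restrictive thickness over all surface facets intersected by the band.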
16.
In this paper, the problem of minimising the maximum completion time on a single batch processing machine is studied. Batch processing is performed on a machine that can simultaneously process several jobs as a batch. The processing time of a batch is determined by the longest processing time of the jobs in the batch. The batch processing machine problem is encountered in many manufacturing systems, such as burn-in operations in the semiconductor industry and heat treatment operations in the metalworking industries. Heuristics are developed by iterative decomposition of a mixed integer programming model, modified from the successive knapsack problem by Ghazvini and Dupont (1998, Minimising mean flow times criteria on a single batch processing machine with non-identical jobs sizes. International Journal of Production Economics 55: 273–280) and the waste of batch clustering algorithm by Chen, Du, and Huang (2011, Scheduling a batch processing machine with non-identical job sizes: a clustering perspective. International Journal of Production Research 49 (19): 5755–5778). Experimental results show that the suggested heuristics produce high-quality solutions comparable to those of previous heuristics in a reasonable computation time.
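To make the problem structure in this abstract concrete, the sketch below is a simple longest-processing-time-first, first-fit batching heuristic for a single batch machine with non-identical job sizes; the makespan is the sum of batch times, each batch taking as long as its longest job. This is a baseline illustration, not the MIP-decomposition heuristic of the paper, and the job data are hypothetical.

```python
# Greedy batching: consider jobs in non-increasing processing time and put
# each into the first existing batch with enough remaining size capacity.
def batch_makespan(jobs, capacity):
    """jobs: list of (processing_time, size); returns (batches, makespan)."""
    batches = []                                  # each batch: [load, [jobs]]
    for p, s in sorted(jobs, reverse=True):       # longest processing time first
        for b in batches:
            if b[0] + s <= capacity:              # first batch that fits
                b[0] += s
                b[1].append((p, s))
                break
        else:
            batches.append([s, [(p, s)]])
    # Jobs enter in non-increasing p, so each batch's first job is its longest.
    makespan = sum(b[1][0][0] for b in batches)
    return batches, makespan

jobs = [(8, 4), (6, 3), (5, 5), (4, 2), (3, 6)]   # (time, size), hypothetical
batches, ms = batch_makespan(jobs, capacity=10)
print(ms)  # → 16: batches led by jobs of time 8, 5, and 3
```

Better heuristics trade off grouping similar processing times (to avoid a short job inheriting a long batch time) against packing sizes tightly, which is exactly the tension the clustering and knapsack formulations cited above address.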
17.
Kit-Nam Francis Leung 《International Journal of Production Research》2013,51(1):66-71
The main purpose of this corrigendum is to point out and rectify the same mistakes made by Schrady (1967), Nahmias and Rivera (1979), and Teunter (2004) in solving their respective models, so that subsequent researchers do not repeat them. To this end, we derive the corresponding correct global-optimal formulae for the substitution-policy model (1, n), with an infinite or finite recovery (or repair) rate, using differential calculus, and provide a closed-form expression to identify the optimal positive integer value of n recovery set-ups. In addition, we also rectify the formulae and solution procedure for numerically solving the constrained non-linear programme.
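Identifying an optimal positive integer n, as this corrigendum does in closed form, follows a common pattern: minimise the relaxed convex cost over real n, then compare the two neighbouring integers. The sketch below applies this to a generic cost C(n) = A·n + B/n; the cost function and its parameters are hypothetical, not the recovery-model formulae of the corrigendum.

```python
import math

# For convex C(n) = A*n + B/n (A, B > 0) the real minimiser is sqrt(B/A);
# the optimal integer is the floor or ceiling, whichever costs less.
def optimal_integer_n(A, B):
    n_star = math.sqrt(B / A)
    lo = max(1, math.floor(n_star))
    hi = lo + 1
    cost = lambda n: A * n + B / n
    return lo if cost(lo) <= cost(hi) else hi

print(optimal_integer_n(2.0, 50.0))  # → 5: real optimum sqrt(50/2) = 5 exactly
print(optimal_integer_n(1.0, 10.0))  # → 3: sqrt(10) ≈ 3.16, and C(3) < C(4)
```

Rounding the real optimiser to the nearest integer without the two-candidate comparison is one of the easy mistakes this pattern avoids.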
18.
This paper considers a two-stage assembly flow shop problem in which m parallel machines are in the first stage and an assembly machine is in the second stage. The objective is to minimise a weighted sum of makespan and mean completion time for n available jobs. As this problem is proven to be NP-hard, we employ an imperialist competitive algorithm (ICA) as the solution approach. In the past literature, Torabzadeh and Zandieh (2010) showed that a cloud theory-based simulated annealing algorithm (CSA) is an appropriate meta-heuristic for this problem. Thus, to justify the claim of ICA's capability, we compare our proposed ICA with the reported CSA. A new parameter-tuning tool for ICA, a neural network, is also introduced. The computational results show that ICA outperforms CSA in solution quality.
19.
20.
Hong-Yi Fan 《Journal of Modern Optics》2013,60(17):1819-1823
Based on the correspondence between the Collins diffraction formula (the optical Fresnel transform) and the transform matrix element of a three-parameter two-mode squeezing operator in the entangled state representation [1], we further explore the relationship between the output field intensity determined by the Collins formula and the input field's probability distribution along an infinitely thin phase-space strip, in both the spatial and the frequency domains. The entangled Wigner function is introduced to recapitulate the result.