Similar Articles
20 similar articles found (search time: 46 ms)
1.
Downarowicz and Maass (2008) proposed topological ranks for all homeomorphic Cantor minimal dynamical systems using properly ordered Bratteli diagrams. In this study, we adapt this definition to the case of essentially minimal zero-dimensional systems. We consider the cases in which the topological rank is 2 and the unique minimal set is a fixed point. Akin and Kolyada (2003) showed that if the unique minimal set of an essentially minimal system is a fixed point, then the system must be proximal. Finite topological rank implies expansiveness; furthermore, in the case of proximal Cantor systems with topological rank 2, the expansiveness is always of the lowest degree. Rank-2 proximal Cantor systems are residually scrambled. We present a necessary and sufficient condition for the unique ergodicity of these systems. In addition, we show that the number of ergodic measures of the systems that are topologically mixing can be either 1 or 2. Moreover, we present examples that are topologically weakly mixing, not topologically mixing, and uniquely ergodic. Finally, we show that the number of ergodic measures of the systems that are not weakly mixing can also be either 1 or 2.

2.
Two constitutive models representative of two well-known modeling techniques for superelastic shape-memory wires are reviewed. The first model was proposed by Kim and Abeyaratne (1995) in the framework of finite thermo-elasticity with non-convex energy. In the present article this model is modified to take into account the difference between the elastic moduli of austenite and martensite and to introduce the isothermal approximation proposed in the same work. The second model was developed by Auricchio et al. (2008) within the theory of irreversible thermodynamics with internal variables. Both models are temperature- and strain-rate-dependent and take thermal effects into account. The focus of this article is on how the two models compare with experimental data obtained from testing superelastic NiTi wires used in the design of a prototype anti-seismic device (Indirli and Castellano, 2008; Chiozzi et al., 2012). After model calibration and numerical implementation, numerical simulations based on the two models are compared with data obtained from uniaxial tensile tests performed at two different temperatures and various strain rates.

3.
An extension of the PS model (Gao et al., 1997) for piezoelectric materials and the SEMPS model (Fan and Zhao, 2011) for magnetoelectroelastic (MEE) materials is proposed for two semi-permeable cracks in an MEE medium. It is assumed that magnetic yielding occurs at the continuations of the cracks due to the prescribed loads. We model these crack continuations as zones with a cohesive saturation-limit magnetic induction. Stroh's formalism and complex-variable techniques are used to formulate the problem. Closed-form analytical expressions are derived for various fracture parameters. A numerical case study is presented for a cracked BaTiO3–CoFe2O4 ceramic plate.

4.
In this paper, the exponential stability of travelling wave solutions for nonlinear cellular neural networks with distributed delays on the lattice is studied. The weighted energy method and a comparison principle are employed to derive sufficient conditions under which the proposed networks are exponentially stable. Following the study of Liu, Weng, and Xu (2009) on the existence of travelling wave solutions in nonlinear delayed cellular neural networks, this paper focuses on the exponential stability of these travelling wave solutions.

5.
In this contribution, we present a novel polygonal finite element method applied to hyperelastic analysis. For generating polygonal meshes in a bounded period of time, we use the adaptive Delaunay tessellation (ADT) proposed by Constantiniu et al. (2008). The ADT is an unstructured hybrid tessellation of a scattered point set that minimally covers the proximal space around each point. In this work, we extend the ADT to nonconvex domains using concepts from constrained Delaunay triangulation (CDT). The proposed method is thus based on a constrained adaptive Delaunay tessellation (CADT) for the discretization of domains into polygonal regions. We employ the metric coordinate (Malsch) method for obtaining the interpolation over convex and nonconvex domains. For the numerical integration of the Galerkin weak form, we resort to classical Gaussian quadrature based on triangles. Numerical examples of two-dimensional hyperelasticity are considered to demonstrate the advantages of the polygonal finite element method.

6.
The expectile, first introduced by Newey and Powell (1987) in the econometrics literature, has recently become increasingly popular in risk management and capital allocation for financial institutions due to desirable properties such as coherence and elicitability. The current standard tool for expectile regression analysis is the multiple linear expectile regression proposed by Newey and Powell (1987). The growing applications of expectile regression motivate us to develop a much more flexible nonparametric multiple expectile regression in a reproducing kernel Hilbert space. The resulting estimator, called KERE, has multiple advantages over classical multiple linear expectile regression, as it incorporates nonlinearity, nonadditivity, and complex interactions in the final estimator. The kernel learning theory of KERE is established. We develop an efficient algorithm, inspired by the majorization-minimization principle, for solving the entire solution path of KERE, and show that the algorithm converges at least at a linear rate. Extensive simulations demonstrate the very competitive finite-sample performance of KERE. We further demonstrate the application of KERE using personal computer price data. Supplementary materials for this article are available online.
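The abstract does not reproduce the KERE algorithm itself. The following is a minimal sketch of the general idea only, assuming the asymmetric least-squares (expectile) loss is minimized by iteratively reweighted kernel ridge regression, one natural MM-type scheme; the RBF kernel, bandwidth, and penalty values are illustrative choices, not those of the paper.

```python
import numpy as np

def rbf_kernel(X, Z, gamma=1.0):
    # Gaussian RBF kernel matrix between the rows of X and Z.
    d2 = ((X[:, None, :] - Z[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def kernel_expectile_reg(X, y, tau=0.5, lam=1e-2, gamma=1.0, n_iter=100):
    """Fit f = K @ alpha under the expectile loss
    L_tau(r) = |tau - 1(r < 0)| * r**2 plus a ridge penalty,
    by iteratively reweighted kernel ridge regression."""
    K = rbf_kernel(X, X, gamma)
    n = len(y)
    alpha = np.zeros(n)
    for _ in range(n_iter):
        r = y - K @ alpha
        w = np.where(r >= 0, tau, 1 - tau)   # asymmetric expectile weights
        W = np.diag(w)
        alpha_new = np.linalg.solve(W @ K + lam * np.eye(n), w * y)
        if np.max(np.abs(alpha_new - alpha)) < 1e-10:
            alpha = alpha_new
            break
        alpha = alpha_new
    return alpha, K

rng = np.random.default_rng(0)
X = rng.uniform(-2, 2, size=(80, 1))
y = np.sin(X[:, 0]) + 0.3 * rng.standard_normal(80)
a50, K = kernel_expectile_reg(X, y, tau=0.5)
a90, _ = kernel_expectile_reg(X, y, tau=0.9)
# The tau = 0.9 fit should lie above the tau = 0.5 fit at most points.
print(np.mean(K @ a90 >= K @ a50))
```

The weight update is the MM step: for a fixed sign pattern of the residuals the expectile loss is an ordinary weighted least-squares problem, which the linear solve handles exactly.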

7.
8.
Bench-scale tests were carried out to investigate the rheological properties of multi-sized particulate Bingham slurries at high solid concentrations ranging from 50% to 70% by weight. In addition, rheological data from Biswas et al. (2000) and Chandel et al. (2009, 2010) have also been considered. Based on this extensive body of rheological data, an empirical model is proposed for viscosity as a function of solid volume fraction (φ), maximum solid volume fraction (φm), median particle diameter (d50), and coefficient of uniformity (Cu), using optimization and a nonlinear least-squares curve-fitting technique. The proposed model shows good agreement with the experimental data considered in the present study and is found to be much better than previously developed models in predicting the viscosity of multi-sized particulate Bingham slurries at high solid concentrations.
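The paper's model form is not given in the abstract. As an illustration of the nonlinear least-squares curve-fitting step, the sketch below fits a hypothetical Krieger-Dougherty-type dependence on solid volume fraction alone (d50 and Cu omitted) to synthetic data, using a coarse grid search; the parameter values are made up for the demo.

```python
import numpy as np

def rel_viscosity(phi, phi_m, n):
    # Krieger-Dougherty-type relative viscosity: diverges as phi -> phi_m.
    return (1.0 - phi / phi_m) ** (-n)

rng = np.random.default_rng(1)
phi = np.linspace(0.30, 0.55, 12)
# Synthetic "measurements" with 2% multiplicative noise.
mu_obs = rel_viscosity(phi, 0.62, 2.0) * (1 + 0.02 * rng.standard_normal(12))

# Coarse grid search for the least-squares optimum, a simple stand-in
# for the paper's nonlinear curve-fitting step.
phi_m_grid = np.linspace(0.57, 0.80, 47)
n_grid = np.linspace(0.5, 4.0, 71)
best = min((np.sum((rel_viscosity(phi, pm, n) - mu_obs) ** 2), pm, n)
           for pm in phi_m_grid for n in n_grid)
sse, phi_m_hat, n_hat = best
print(round(phi_m_hat, 3), round(n_hat, 2))
```

The grid search avoids the divergence problems a gradient-based fitter can hit when a trial φm falls below the largest measured φ.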

9.
Lumpy demand is a phenomenon encountered in manufacturing or retailing when the items are slow-moving or too expensive, for example fighter plane engines. So far, the seminal procedure of Croston (1972), with or without modifications, has been the preferred method for forecasting lumpy demand. Nevertheless, Croston (1974) and others, such as Venkitachalam et al. (2002), have suggested the use of zero forecasts when the demand contains many zeros. In this paper, we put this idea to the test in a full factorial study comparing five forecasting methods, including all-zero, under several levels of demand lumpiness, demand variation, and ordering, holding, and shortage cost. We evaluate the forecasting methods by three measures of forecast error and two measures of inventory cost. We find that all-zero forecasts yield the lowest cost when lumpiness is high; they are also best for mid-lumpiness if the shortage cost is much higher than the holding cost. We also find that the lowest forecasting error does not necessarily lead to the lowest system cost. And, contrary to the assertions in Chen et al. (2000b) and Dejonckheere et al. (2003, 2004), our factorial experiment reinforces the intuition that simple exponential smoothing is superior to an equivalent moving average.
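A minimal sketch of Croston's procedure next to the all-zero benchmark, with an illustrative smoothing constant; on a very lumpy synthetic series it also shows the flavor of the paper's finding that the all-zero forecast can score better on a simple error measure.

```python
import numpy as np

def croston(demand, alpha=0.1):
    """Croston's (1972) intermittent-demand method: smooth the nonzero
    demand sizes and the inter-demand intervals separately; the
    per-period forecast is size / interval, updated only when a
    nonzero demand occurs. Returns one-step-ahead forecasts."""
    z_hat = p_hat = None
    q = 1                              # periods since last nonzero demand
    f = np.zeros(len(demand))
    for t, d in enumerate(demand):
        f[t] = 0.0 if z_hat is None else z_hat / p_hat
        if d > 0:
            if z_hat is None:          # initialize at the first demand
                z_hat, p_hat = float(d), float(q)
            else:
                z_hat += alpha * (d - z_hat)
                p_hat += alpha * (q - p_hat)
            q = 1
        else:
            q += 1
    return f

demand = np.array([0, 0, 5, 0, 0, 0, 4, 0, 6, 0, 0, 0])
fc = croston(demand)
all_zero = np.zeros_like(fc)
# Mean absolute error of each forecast over the series:
print(np.abs(demand - fc).mean(), np.abs(demand - all_zero).mean())
```

Note the caveat from the abstract: a lower forecast-error score does not by itself imply a lower inventory cost, which is why the paper evaluates both.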

10.
It is increasingly recognized that many industrial and engineering experiments use split-plot or other multi-stratum structures. Much recent work has concentrated on finding optimum, or near-optimum, designs for estimating the fixed-effects parameters in multi-stratum designs. However, inference, such as hypothesis testing or interval estimation, will often also be required, and for inference to be unbiased in the presence of model uncertainty requires pure-error estimates of the variance components. Most optimal designs provide few, if any, pure-error degrees of freedom. Gilmour and Trinca (2012) introduced design optimality criteria for inference in the context of completely randomized and block designs. Here these criteria are used stratum by stratum to obtain multi-stratum designs. It is shown that these designs have better properties for performing inference than standard optimum designs. Compound criteria, which combine the inference criteria with traditional point-estimation criteria, are also used, and the designs obtained are shown to compromise between point estimation and inference. Designs are obtained for two real split-plot experiments and an illustrative split-split-plot structure. Supplementary materials for this article are available online.

11.
A strategy is presented to obtain production sequences resulting in minimal tooling replacements. An objective function is employed to distribute the tool wear as evenly as possible throughout the sequence. This objective function is an extension of Miltenburg's earlier work (1989), which was concerned with obtaining production sequences that evenly distribute the satisfaction of demand. Smaller problems are solved to optimality, while larger problems are solved as close to optimality as possible. The production sequences are simulated to estimate the required tooling replacements. The methodology presented here consistently results in fewer tooling replacements when compared with earlier published work (McMullen et al. 2002, McMullen 2003).

12.
Partial least squares (PLS) is a widely used method for prediction in applied statistics, especially in chemometrics applications. However, PLS is not invariant or equivariant under scale transformations of the predictors, which tends to limit its scope to regressions in which the predictors are measured in the same or similar units. Cook, Helland, and Su (2013) built a connection between nascent envelope methodology and PLS, allowing PLS to be addressed in a traditional likelihood-based framework. In this article, we use the connection between PLS and envelopes to develop a new method, scaled predictor envelopes (SPE), that incorporates predictor scaling into PLS-type applications. By estimating the appropriate scales, the SPE estimators can offer efficiency gains beyond those given by PLS, and further reduce prediction errors. Simulations and an example are given to support the theoretical claims.
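The scale non-invariance that motivates SPE can be seen directly in a one-component PLS fit. The sketch below is a plain NIPALS-style PLS1 with no internal scaling, run on synthetic data: rescaling a single predictor (as if its units changed) alters the predictions, whereas a scale-equivariant method would leave them unchanged.

```python
import numpy as np

def pls1_predict(X, y, Xnew):
    """One-component PLS1 without predictor scaling, so the effect of a
    units change in one predictor is visible in the predictions."""
    Xc, yc = X - X.mean(0), y - y.mean()
    w = Xc.T @ yc                     # PLS weight direction
    w /= np.linalg.norm(w)
    t = Xc @ w                        # scores
    q = (t @ yc) / (t @ t)            # y-loading
    beta = w * q                      # implied regression coefficients
    return (Xnew - X.mean(0)) @ beta + y.mean()

rng = np.random.default_rng(2)
X = rng.standard_normal((50, 4))
y = X @ np.array([1.0, -0.5, 0.0, 2.0]) + 0.1 * rng.standard_normal(50)
Xnew = rng.standard_normal((5, 4))

pred = pls1_predict(X, y, Xnew)
# Rescale predictor 0 by 10 (a pure change of units) in both the
# training and the new data; PLS predictions change.
S = np.diag([10.0, 1.0, 1.0, 1.0])
pred_scaled = pls1_predict(X @ S, y, Xnew @ S)
print(np.max(np.abs(pred - pred_scaled)))
```

The weight vector is proportional to the predictor-response covariances, so rescaling one column tilts the projection direction rather than compensating for the units, which is exactly the behavior SPE is designed to repair.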

13.
In this study, fluidized bed drying experiments were conducted for poplar wood particles (Populus deltoides) at temperatures ranging from 90°C to 120°C and air velocities ranging from 2.8 m s⁻¹ to 3.3 m s⁻¹. The initial moisture content (MC) and the bed height of the poplar wood particles were 150% (on an oven-dry basis) and 2 cm, respectively. The results showed that the drying rate increased with increasing drying temperature and air velocity. The constant-rate drying period was observed only at the early stages of the drying process, and most of the drying took place in the falling-rate period. The experimental drying data were fitted to 11 models. Among these, the models of Midilli, Kucuk, and Yapar (2002) and Henderson and Pabis (1961) were found to satisfactorily describe the drying characteristics of poplar wood particles. The effective moisture diffusivity of the wood particles increased from 7.0 × 10⁻⁶ to 8.46 × 10⁻⁶ m² s⁻¹ and from 7.65 × 10⁻⁶ to 1.44 × 10⁻⁵ m² s⁻¹ as the drying air temperature increased from 90°C to 120°C, for air velocities of 2.8 m s⁻¹ and 3.3 m s⁻¹, respectively. The activation energies of diffusion were 34.08 kJ mol⁻¹ and 64.70 kJ mol⁻¹ for the air velocities of 2.8 m s⁻¹ and 3.3 m s⁻¹, respectively.
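Of the cited thin-layer models, Henderson and Pabis (1961) has the simple form MR(t) = a · exp(−k t), which can be fitted by linearizing (ln MR = ln a − k t) and using ordinary least squares. The sketch below does this on synthetic moisture-ratio data; the parameter values and time grid are illustrative, not the paper's measurements.

```python
import numpy as np

def henderson_pabis(t, a, k):
    # Henderson and Pabis (1961) thin-layer drying model.
    return a * np.exp(-k * t)

t = np.linspace(0, 60, 13)          # drying time (illustrative units)
rng = np.random.default_rng(3)
# Synthetic moisture-ratio "measurements" with 1% multiplicative noise.
mr = henderson_pabis(t, 1.02, 0.05) * (1 + 0.01 * rng.standard_normal(13))

# Linearize and fit by ordinary least squares: ln MR = ln a - k t.
slope, intercept = np.polyfit(t, np.log(mr), 1)
a_hat, k_hat = np.exp(intercept), -slope
print(round(a_hat, 3), round(k_hat, 4))
```

The same fitted curve is what model-comparison statistics such as R² and RMSE are computed against when ranking the candidate drying models.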

14.
The in-control average run length (ICARL) is often the metric used to design and implement a control chart in practice. To this end, the ICARL robustness of a control chart, that is, how well the chart maintains its advertised nominal ICARL value under violations of the underlying assumptions, is crucial. Without ICARL robustness, the shift-detection properties of the chart become questionable. In this article, the ICARL robustness of the well-known adaptive exponentially weighted moving average (AEWMA) chart of Capizzi and Masarotto (2003) is first examined, in an extensive simulation study, with respect to the underlying assumption of normality. The ICARL profiles of the AEWMA chart are calculated for a range of distributions of various shapes, including light-tailed, heavy-tailed, symmetric, and skewed. Our results show that the AEWMA chart is quite sensitive to the normality (shape) assumption and may not maintain the nominal ICARL under non-normality. Motivated by this, a distribution-free (nonparametric) analog of the AEWMA chart (called the NPAEWMA chart), based on the Wilcoxon rank-sum statistic, is proposed for use when a Phase I reference sample is available. The NPAEWMA chart shows good ICARL robustness against non-normality together with good shift-detection capacity.
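A sketch of the AEWMA recursion, assuming the Huber-type score commonly associated with Capizzi and Masarotto's chart; the constants `lam` and `k` are illustrative, not tuned values from the paper. Small residuals are smoothed like an ordinary EWMA, while large residuals are followed almost unsmoothed, Shewhart-style.

```python
import numpy as np

def aewma(x, lam=0.1, k=3.0):
    """Adaptive EWMA statistic z_t = z_{t-1} + phi(x_t - z_{t-1}),
    where phi is a Huber-type score: lam*e for |e| <= k, and
    e - sign(e)*(1 - lam)*k for |e| > k."""
    def phi(e):
        if abs(e) <= k:
            return lam * e
        return e - np.sign(e) * (1 - lam) * k
    z = np.empty(len(x))
    prev = 0.0                        # in-control target mean
    for t, xt in enumerate(x):
        prev = prev + phi(xt - prev)
        z[t] = prev
    return z

rng = np.random.default_rng(4)
x = np.concatenate([rng.standard_normal(100),         # in control
                    4.0 + rng.standard_normal(20)])   # large mean shift
z = aewma(x)
# In control the statistic hovers near 0; after the shift it should
# move quickly toward the new mean of 4.
print(z[99], z[-1])
```

The adaptivity is entirely in `phi`: with `k` large the recursion reduces to a standard EWMA, and with `k` near 0 it approaches a Shewhart chart, which is why one design covers both small and large shifts.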

15.
In this paper, the problem of minimising the maximum completion time on a single batch processing machine is studied. Batch processing is performed on a machine that can simultaneously process several jobs as a batch. The processing time of a batch is determined by the longest processing time among the jobs in the batch. The batch processing machine problem is encountered in many manufacturing systems, such as burn-in operations in the semiconductor industry and heat treatment operations in the metalworking industries. Heuristics are developed by iterative decomposition of a mixed integer programming model, modified from the successive knapsack problem of Ghazvini and Dupont (1998) and the waste-of-batch clustering algorithm of Chen, Du, and Huang (2011). Experimental results show that the suggested heuristics produce high-quality solutions comparable to those of previous heuristics in a reasonable computation time.
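The decomposition heuristics themselves are not given in the abstract. The sketch below is a simple first-fit-decreasing stand-in for the same problem: pack jobs into batches under a size capacity, with each batch's processing time equal to its longest job and the makespan equal to the sum of batch times.

```python
def batch_schedule(jobs, capacity):
    """First-fit-decreasing heuristic for a single batch-processing
    machine. jobs = [(size, proc_time), ...]. Sorting by processing
    time keeps long jobs together, so each batch's max time (which the
    whole batch pays) is shared by similarly long jobs."""
    batches = []                      # each entry: [remaining_cap, max_time]
    for size, p in sorted(jobs, key=lambda j: -j[1]):
        for b in batches:
            if b[0] >= size:          # first batch with room
                b[0] -= size
                b[1] = max(b[1], p)   # p <= current max by sort order
                break
        else:
            batches.append([capacity - size, p])
    return sum(b[1] for b in batches)  # makespan

jobs = [(3, 8), (5, 7), (2, 6), (4, 5), (6, 4), (2, 3)]
print(batch_schedule(jobs, capacity=10))   # sum of batch max times
```

With the data above the heuristic forms three batches with processing times 8, 5, and 3, for a makespan of 16; the paper's iterative MIP decomposition aims to beat exactly this kind of greedy bound.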

16.
Since their introduction by Jones and Nachtsheim (2011), definitive screening designs (DSDs) have seen application in fields as diverse as bio-manufacturing, green energy production, and laser etching. One barrier to their routine adoption for screening is the difficulty practitioners experience in model selection when both main effects and second-order effects are active. Jones and Nachtsheim showed that for six or more factors, DSDs project onto designs in any three factors that can fit a full quadratic model. In addition, they showed that DSDs have high power for detecting all the main effects as well as one two-factor interaction or one quadratic effect, as long as the true effects are much larger than the error standard deviation. However, simulation studies of model selection strategies applied to DSDs can disappoint by failing to identify the correct set of active second-order effects when there are more than a few such effects. Standard model selection strategies such as stepwise regression, all-subsets regression, and the Dantzig selector are general tools that do not make use of any structural information about the design. It seems reasonable that a modeling approach that exploits the known structure of a designed experiment could perform better than more general-purpose strategies. This article shows how to take advantage of the special structure of the DSD to obtain the most clear-cut analytical results possible.

17.
Rajagopalan and Irani (2006) provide a critique of the integrated cell/process/capacity formation (ICPCF) approach of Malakooti et al. (2004) and suggest an improved method for solving the ICPCF problem. Rajagopalan and Irani (2006) attempt to solve the emergency-room layout problem presented in Malakooti et al. (2004) and claim to have obtained an improved solution from their approach (a hybrid flowshop layout). Although there are certain advantages to considering Rajagopalan and Irani's (2006) approach, we believe that their approach to solving ICPCF problems has significant shortcomings.

18.
19.
Following on from the work of Anabtawi et al. (2003), this study examined how the volumetric liquid-phase mass transfer coefficient, kLa, of oxygen in air in three-phase spout-fluid beds was affected by varying the system parameters of bed height, bed diameter, gas velocity, and liquid velocity. The liquid used was a 0.1% CMC solution, displaying pseudo-plastic rheology, with 1.75 mm glass spheres as packing. The values of the Sherwood number, in the range 9,000–186,000, were lower than in previous studies (Anabtawi et al., 2003). Gas velocity had a similar effect on kLa as in a bubble column, with results also in good agreement with previous work on two-phase and three-phase spouted bed systems. The correlation obtained for the effect of liquid velocity on kLa compared well with that of Schumpe et al. (1989). An increase in the height of packing increased kLa to the power of 0.319, and an increase in column diameter also increased kLa, in agreement with the results of Akita and Yoshida (1973).

20.
The main purpose of this corrigendum is to indicate and rectify the same mistakes made by Schrady (1967), Nahmias and Rivera (1979), and Teunter (2004) in the course of solving their respective models, so that subsequent researchers will not repeat them. To this end, we derive the corresponding correct global-optimal formulae for the substitution-policy model (1, n), with infinite or finite recovery (repair) rate, using differential calculus, and we provide a closed-form expression to identify the optimal positive integer value of n recovery set-ups. In addition, we rectify the formulae and solution procedure for numerically solving the constrained non-linear programme.
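The corrigendum's corrected formulae are not reproduced in the abstract. The sketch below only illustrates the generic pattern behind such results, assuming a hypothetical EOQ-style cost TC(n) = sqrt(2 D (K1 + n K2)(a + b/n)) with n set-ups per cycle: the unconstrained minimizer is n* = sqrt(K1 b / (K2 a)), and the optimal positive integer is the cheaper of its floor and ceiling. All parameter values are made up for the demo.

```python
import math

def total_cost(n, D, K1, K2, a, b):
    # Hypothetical EOQ-with-recovery cost per unit time; (K1 + n*K2) is
    # the set-up cost per cycle, (a + b/n) the holding-cost factor.
    return math.sqrt(2 * D * (K1 + n * K2) * (a + b / n))

def best_integer_n(D, K1, K2, a, b):
    # (K1 + n*K2)*(a + b/n) is convex in n > 0, so checking the two
    # integers around the real optimum suffices.
    n_star = math.sqrt(K1 * b / (K2 * a))
    candidates = {max(1, math.floor(n_star)), max(1, math.ceil(n_star))}
    return min(candidates, key=lambda n: total_cost(n, D, K1, K2, a, b))

D, K1, K2, a, b = 1000.0, 200.0, 25.0, 2.0, 40.0
n_opt = best_integer_n(D, K1, K2, a, b)
# Brute force over a wide range agrees with the closed-form rule.
n_brute = min(range(1, 200), key=lambda n: total_cost(n, D, K1, K2, a, b))
print(n_opt, n_brute)
```

The floor/ceiling check is the step the corrigendum formalizes: optimizing over the reals and then rounding without comparing both neighbors is one of the easy ways such models go wrong.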


Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号