20 similar documents found; search time: 46 ms
1.
Takashi Shimomura 《Dynamical Systems: An International Journal》2018,33(2):275-302
Downarowicz and Maass [7] proposed topological ranks for all homeomorphic Cantor minimal dynamical systems using properly ordered Bratteli diagrams. In this study, we adapt this definition to the case of essentially minimal zero-dimensional systems. We consider the cases in which the topological rank is 2 and the unique minimal set is a fixed point. Akin and Kolyada [2] had shown that if the unique minimal set of an essentially minimal system is a fixed point, then the system must be proximal. Finite topological rank implies expansiveness; furthermore, in the case of proximal Cantor systems with topological rank 2, the expansiveness is always of the lowest degree. Rank 2 proximal Cantor systems are residually scrambled. We present a necessary and sufficient condition for the unique ergodicity of these systems. In addition, we show that the number of ergodic measures of the systems that are topologically mixing can be 1 and 2. Moreover, we present examples that are topologically weakly mixing, not topologically mixing, and uniquely ergodic. Finally, we show that the number of ergodic measures of the systems that are not weakly mixing can also be 1 and 2.
2.
Two constitutive models representative of two well-known modeling techniques for superelastic shape-memory wires are reviewed. The first model was proposed by Kim and Abeyaratne in the framework of finite thermo-elasticity with non-convex energy [1]. In the present article, this model is modified to take into account the difference between the elastic moduli of austenite and martensite and to introduce the isothermal approximation proposed in [1]. The second model was developed by Auricchio et al. within the theory of irreversible thermodynamics with internal variables [2]. Both models are temperature and strain-rate dependent, and they take thermal effects into account. The focus of this article is on investigating how the two models compare with experimental data obtained from testing superelastic NiTi wires used in the design of a prototype anti-seismic device [3, 4]. After model calibration and numerical implementation, numerical simulations based on the two models are compared with data obtained from uniaxial tensile tests performed at two different temperatures and various strain rates.
3.
Kamlesh Jangid 《International Journal for Computational Methods in Engineering Science and Mechanics》2018,19(2):129-137
An extension of the PS model (Gao et al. [1]) for piezoelectric materials and of the SEMPS model (Fan and Zhao [2]) for MEE materials is proposed for two semi-permeable cracks in a magnetoelectroelastic (MEE) medium. It is assumed that magnetic yielding occurs at the continuations of the cracks due to the prescribed loads. We model these crack continuations as zones with a cohesive saturation-limit magnetic induction. Stroh's formalism and complex-variable techniques are used to formulate the problem. Closed-form analytical expressions are derived for various fracture parameters. A numerical case study is presented for a cracked BaTiO3-CoFe2O4 ceramic plate.
4.
Yingxin Guo 《Dynamical Systems: An International Journal》2017,32(4):490-503
In this paper, the exponential stability of travelling wave solutions for nonlinear cellular neural networks with distributed delays on the lattice is studied. The weighted energy method and a comparison principle are employed to derive sufficient conditions under which the proposed networks are exponentially stable. Following the study [13] on the existence of travelling wave solutions in nonlinear delayed cellular neural networks, this paper focuses on the exponential stability of these travelling wave solutions.
5.
In this contribution, we present a novel polygonal finite element method applied to hyperelastic analysis. For generating polygonal meshes in a bounded period of time, we use the adaptive Delaunay tessellation (ADT) proposed by Constantiniu et al. [1]. ADT is an unstructured hybrid tessellation of a scattered point set that minimally covers the proximal space around each point. In this work, we have extended the ADT to nonconvex domains using concepts from constrained Delaunay triangulation (CDT). The proposed method is thus based on a constrained adaptive Delaunay tessellation (CADT) for the discretization of domains into polygonal regions. We employ the metric coordinate (Malsch) method to obtain the interpolation over convex and nonconvex domains. For the numerical integration of the Galerkin weak form, we resort to classical Gaussian quadrature based on triangles. Numerical examples of two-dimensional hyperelasticity are considered to demonstrate the advantages of the polygonal finite element method.
6.
Expectile, first introduced by Newey and Powell in 1987 in the econometrics literature, has recently become increasingly popular in risk management and capital allocation for financial institutions due to desirable properties such as coherence and elicitability. The current standard tool for expectile regression analysis is the multiple linear expectile regression proposed by Newey and Powell in 1987. The growing applications of expectile regression motivate us to develop a much more flexible nonparametric multiple expectile regression in a reproducing kernel Hilbert space. The resulting estimator, called KERE, has multiple advantages over classical multiple linear expectile regression by incorporating nonlinearity, nonadditivity, and complex interactions into the final estimator. The kernel learning theory of KERE is established. We develop an efficient algorithm inspired by the majorization-minimization principle for computing the entire solution path of KERE, and it is shown that the algorithm converges at least at a linear rate. Extensive simulations are conducted to show the very competitive finite-sample performance of KERE. We further demonstrate the application of KERE using personal computer price data. Supplementary materials for this article are available online.
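The asymmetric squared loss underlying expectile regression is simple to state. The sketch below is ours, not the KERE algorithm itself (KERE works in an RKHS with a majorization-minimization path algorithm); it computes a sample expectile by the standard fixed-point iteration:

```python
import numpy as np

def expectile_loss(r, tau):
    """Asymmetric squared loss: weight tau on nonnegative residuals,
    1 - tau on negative ones (tau = 0.5 recovers squared error)."""
    w = np.where(r >= 0, tau, 1.0 - tau)
    return w * r ** 2

def sample_expectile(x, tau, tol=1e-10, max_iter=1000):
    """Fixed-point iteration for the tau-expectile of a sample: the
    expectile is the self-consistent asymmetrically weighted mean."""
    x = np.asarray(x, dtype=float)
    e = x.mean()
    for _ in range(max_iter):
        w = np.where(x >= e, tau, 1.0 - tau)
        e_new = np.sum(w * x) / np.sum(w)
        if abs(e_new - e) < tol:
            break
        e = e_new
    return e
```

For tau = 0.5 this reduces to the ordinary mean; larger tau pulls the expectile toward the upper tail of the sample.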
7.
8.
K. M. Assefa 《Particulate Science and Technology》2017,35(1):77-85
Bench-scale tests were carried out to investigate the rheological properties of multi-sized particulate Bingham slurries at high solid concentrations ranging from 50% to 70% by weight. In addition, rheological data from Biswas et al. (2000) and Chandel et al. (2009, 2010) have also been considered. Based on this extensive amount of rheological data, an empirical model is proposed for viscosity as a function of solid volume fraction (φ), maximum solid volume fraction (φm), median particle diameter (d50), and coefficient of uniformity (Cu), using optimization and a nonlinear least-squares curve-fitting technique. The proposed model shows good agreement with the experimental data considered in the present study and is found to be much better than previously developed models in predicting the viscosity of multi-sized particulate Bingham slurries at high solid concentrations.
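As a hedged illustration of this kind of curve fitting: the functional form below is a generic Krieger-Dougherty-type relation chosen only for the sketch, not the model proposed in the article (which also involves d50 and Cu), and the data are synthetic. With φm treated as known, the exponent is recovered by linearised least squares:

```python
import numpy as np

# Hypothetical Krieger-Dougherty-type relation with phi_m treated as
# known; the article's own model (with d50 and Cu terms) is not shown.
phi_m = 0.62
phi = np.linspace(0.1, 0.5, 20)          # solid volume fractions
eta_rel = (1.0 - phi / phi_m) ** (-1.8)  # synthetic "measurements"

# Linearise ln(eta_rel) = -n * ln(1 - phi/phi_m), then least squares
x = np.log(1.0 - phi / phi_m)
n_hat = -np.polyfit(x, np.log(eta_rel), 1)[0]
```

On noise-free synthetic data the fitted exponent matches the generating value; with real rheometer data one would fit all parameters jointly by nonlinear least squares, as the article does.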
9.
Dean C. Chatfield 《International Journal of Production Research》2013,51(4):935-950
Lumpy demand is a phenomenon encountered in manufacturing or retailing when items are slow-moving or very expensive, for example fighter plane engines. So far, the seminal procedure of Croston (1972), with or without modifications, has been the preferred approach for forecasting lumpy demand. Nevertheless, Croston (1974) and others, such as Venkitachalam et al. (2002), have suggested the use of zero forecasts when the demand contains many zeros. In this paper, we put this idea to the test in a full factorial study comparing five forecasting methods, including all-zero, under several levels of demand lumpiness, demand variation, and ordering, holding, and shortage cost. We evaluate the forecasting methods by three measures of forecast error and two measures of inventory cost. We find that all-zero forecasts yield the lowest cost when lumpiness is high; they are also best for mid-lumpiness if the shortage cost is much higher than the holding cost. We also find that the lowest forecasting error does not necessarily lead to the lowest system cost. And contrary to the assertions in Chen et al. (2000b) and Dejonckheere et al. (2003, 2004), our factorial experiment reinforces the intuition that simple exponential smoothing is superior to an equivalent moving average.
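For readers unfamiliar with the baseline, Croston's (1972) procedure smooths nonzero demand sizes and inter-demand intervals separately; a minimal sketch (our simplification, with an illustrative smoothing constant) is:

```python
def croston(demand, alpha=0.1):
    """Croston's method: exponentially smooth nonzero demand sizes (z)
    and inter-demand intervals (p) separately; the per-period forecast
    is z / p. Before the first nonzero demand, forecast 0."""
    z = None  # smoothed demand size
    p = None  # smoothed inter-demand interval
    q = 1     # periods since last nonzero demand
    forecasts = []
    for d in demand:
        forecasts.append(0.0 if z is None else z / p)
        if d > 0:
            if z is None:
                z, p = d, q          # initialise on first demand
            else:
                z = z + alpha * (d - z)
                p = p + alpha * (q - p)
            q = 1
        else:
            q += 1
    return forecasts
```

The all-zero alternative examined in the paper simply forecasts 0 every period, trading higher shortage risk for lower holding cost.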
10.
It is increasingly recognized that many industrial and engineering experiments use split-plot or other multi-stratum structures. Much recent work has concentrated on finding optimum, or near-optimum, designs for estimating the fixed effects parameters in multi-stratum designs. However, often inference, such as hypothesis testing or interval estimation, will also be required and for inference to be unbiased in the presence of model uncertainty requires pure error estimates of the variance components. Most optimal designs provide few, if any, pure error degrees of freedom. Gilmour and Trinca (2012) introduced design optimality criteria for inference in the context of completely randomized and block designs. Here these criteria are used stratum-by-stratum to obtain multi-stratum designs. It is shown that these designs have better properties for performing inference than standard optimum designs. Compound criteria, which combine the inference criteria with traditional point estimation criteria, are also used and the designs obtained are shown to compromise between point estimation and inference. Designs are obtained for two real split-plot experiments and an illustrative split–split-plot structure. Supplementary materials for this article are available online.
11.
P. R. McMullen 《International Journal of Production Research》2013,51(12):2465-2478
A strategy is presented to obtain production sequences resulting in minimal tooling replacements. An objective function is employed to distribute the tool wear as evenly as possible throughout the sequence. This objective function is an extension of Miltenburg's earlier work (1989) on obtaining production sequences that evenly distribute the satisfaction of demand. Smaller problems are solved to optimality, while larger problems are solved as close to optimality as possible. The production sequences are simulated to estimate the required tooling replacements. The methodology presented here consistently results in fewer tooling replacements when compared with earlier published work (McMullen et al. 2002, McMullen 2003).
12.
Partial least squares (PLS) is a widely used method for prediction in applied statistics, especially in chemometrics applications. However, PLS is not invariant or equivariant under scale transformations of the predictors, which tends to limit its scope to regressions in which the predictors are measured in the same or similar units. Cook, Helland, and Su (2013) built a connection between nascent envelope methodology and PLS, allowing PLS to be addressed in a traditional likelihood-based framework. In this article, we use the connection between PLS and envelopes to develop a new method—scaled predictor envelopes (SPE)—that incorporates predictor scaling into PLS-type applications. By estimating the appropriate scales, the SPE estimators can offer efficiency gains beyond those given by PLS, and further reduce prediction errors. Simulations and an example are given to support the theoretical claims.
13.
Mohammad Arabi Mohammad Mehdi Faezipour Mohammad Layeghi Majid Khanali Hamid Zareahosseinabadi 《Particulate Science and Technology》2017,35(6):723-730
In this study, fluidized bed drying experiments were conducted for poplar wood particles (Populus deltoides) at temperatures ranging from 90°C to 120°C and air velocities ranging from 2.8 m s−1 to 3.3 m s−1. The initial moisture content (MC) and the bed height of the poplar wood particles were 150% (on an oven-dry basis) and 2 cm, respectively. The results showed that the drying rate increased with increasing drying temperature and air velocity. The constant drying rate period was observed only at the early stages of the drying process, and most of the drying process occurred in the falling rate period. The experimental drying data were fitted to eleven models. Among these models, those of Midilli, Kucuk, and Yapar (2002) and Henderson and Pabis (1961) were found to satisfactorily describe the drying characteristics of poplar wood particles. The effective moisture diffusivity of the wood particles increased from 7.00 × 10−6 to 8.46 × 10−6 m2 s−1 and from 7.65 × 10−6 to 1.44 × 10−5 m2 s−1 as the drying air temperature increased from 90°C to 120°C at air velocities of 2.8 m s−1 and 3.3 m s−1, respectively. Also, the activation energies of diffusion were 34.08 kJ mol−1 and 64.70 kJ mol−1 for air velocities of 2.8 m s−1 and 3.3 m s−1, respectively.
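Activation energies of this kind come from an Arrhenius fit of effective diffusivity against absolute temperature. A two-point version of that calculation (with hypothetical diffusivity values, not the article's data, which were fitted over the full temperature range) can be sketched as:

```python
import math

R = 8.314  # universal gas constant, J mol^-1 K^-1

def activation_energy(D1, T1, D2, T2):
    """Two-point Arrhenius estimate: D = D0 * exp(-Ea / (R * T)) gives
    Ea = R * ln(D2 / D1) / (1/T1 - 1/T2), temperatures in kelvin."""
    return R * math.log(D2 / D1) / (1.0 / T1 - 1.0 / T2)

# Hypothetical values: diffusivity triples between 350 K and 400 K
Ea = activation_energy(1.0e-6, 350.0, 3.0e-6, 400.0)  # J mol^-1
```

With more than two temperatures, Ea is taken from the slope of ln D versus 1/T, which is the standard practice in drying studies.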
14.
R. Zheng 《Quality Engineering》2016,28(4):476-490
The in-control average run-length (ICARL) is often the metric used to design and implement a control chart in practice. To this end, the ICARL robustness of a control chart, that is, how well the chart maintains its advertised nominal ICARL value under violations of the underlying assumptions, is crucial. Without ICARL robustness, the shift detection properties of the chart become questionable. In this article, first, the ICARL robustness of the well-known adaptive exponentially weighted moving average (AEWMA) chart of Capizzi and Masarotto (2003) is examined, in an extensive simulation study, with respect to the underlying assumption of normality. The ICARL profiles of the AEWMA chart are calculated for a range of distributions of various shapes, including light-tailed, heavy-tailed, symmetric, and skewed. Our results show that the AEWMA chart is quite sensitive to the normality (shape) assumption and may not maintain the nominal ICARL under non-normality. Motivated by this, a distribution-free (nonparametric) analog of the AEWMA chart (called the NPAEWMA chart), based on the Wilcoxon rank sum statistic, is proposed for applications in which a Phase I reference sample is available. The NPAEWMA chart shows good ICARL robustness against non-normality and good shift detection capability.
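The adaptive EWMA idea can be sketched as follows: the statistic is updated by a score of the prediction error that acts like an ordinary EWMA for small errors and approaches a Shewhart-type reaction for large ones. The Huber-type score and the constants below are illustrative only, not the exact design of Capizzi and Masarotto (2003):

```python
def huber_score(e, lam=0.1, k=3.0):
    """Huber-type score: lam * e for small errors |e| <= k, and
    e - sign(e) * (1 - lam) * k for large ones."""
    if abs(e) <= k:
        return lam * e
    return e - (1.0 - lam) * k * (1.0 if e > 0 else -1.0)

def aewma_path(x, z0=0.0, lam=0.1, k=3.0):
    """Adaptive EWMA statistic: z_i = z_{i-1} + phi(e_i),
    where e_i = x_i - z_{i-1} and phi is the score above."""
    z = z0
    path = []
    for xi in x:
        e = xi - z
        z = z + huber_score(e, lam, k)
        path.append(z)
    return path
```

For |e| <= k the update equals (1 - lam) * z + lam * x, i.e. a standard EWMA; the nonparametric NPAEWMA chart replaces the raw observation with a Wilcoxon rank sum statistic computed against the Phase I reference sample.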
15.
In this paper, the problem of minimising the maximum completion time on a single batch processing machine is studied. Batch processing is performed on a machine that can simultaneously process several jobs as a batch. The processing time of a batch is determined by the longest processing time among the jobs in the batch. The batch processing machine problem is encountered in many manufacturing systems, such as burn-in operations in the semiconductor industry and heat treatment operations in the metalworking industries. Heuristics are developed by iterative decomposition of a mixed integer programming model, modified from the successive knapsack problem of Ghazvini and Dupont (1998, Minimising mean flow times criteria on a single batch processing machine with non-identical jobs sizes. International Journal of Production Economics 55: 273–280) and the waste of batch clustering algorithm of Chen, Du, and Huang (2011, Scheduling a batch processing machine with non-identical job sizes: a clustering perspective. International Journal of Production Research 49 (19): 5755–5778). Experimental results show that the suggested heuristics produce high-quality solutions comparable to those of previous heuristics in a reasonable computation time.
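To make the problem concrete, here is a simple first-fit-decreasing batching sketch (ours, not the decomposition heuristic developed in the paper): jobs are sorted by processing time, packed into batches under a size capacity, and each batch takes the time of its longest job.

```python
def batch_makespan(jobs, capacity):
    """jobs: list of (processing_time, size) pairs. First-fit
    decreasing on processing time; each batch's time is set by its
    longest job, and the makespan is the sum of batch times."""
    batches = []  # each entry: [remaining_capacity, batch_time]
    for p, s in sorted(jobs, reverse=True):
        for b in batches:
            if b[0] >= s:
                b[0] -= s
                # jobs arrive in decreasing p, so the batch's first
                # job already fixed its processing time
                break
        else:
            batches.append([capacity - s, p])
    return sum(b[1] for b in batches)
```

Grouping long jobs together is the key intuition: a short job added to a batch of long jobs is processed "for free", which is what the clustering and knapsack perspectives cited above exploit.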
16.
Since their introduction by Jones and Nachtsheim in 2011, definitive screening designs (DSDs) have seen application in fields as diverse as bio-manufacturing, green energy production, and laser etching. One barrier to their routine adoption for screening is the difficulty practitioners experience in model selection when both main effects and second-order effects are active. Jones and Nachtsheim showed that for six or more factors, DSDs project to designs in any three factors that can fit a full quadratic model. In addition, they showed that DSDs have high power for detecting all the main effects as well as one two-factor interaction or one quadratic effect, as long as the true effects are much larger than the error standard deviation. However, simulation studies of model selection strategies applied to DSDs can disappoint by failing to identify the correct set of active second-order effects when there are more than a few such effects. Standard model selection strategies such as stepwise regression, all-subsets regression, and the Dantzig selector are general tools that do not make use of any structural information about the design. It seems reasonable that a modeling approach that makes use of the known structure of a designed experiment could perform better than more general-purpose strategies. This article shows how to take advantage of the special structure of the DSD to obtain the most clear-cut analytical results possible.
17.
Rajagopalan and Irani (Some comments on Malakooti et al. ‘Integrated group technology, cell formation, process planning, and production planning with application to the emergency room’. Int. J. Prod. Res., 2006, 44, 2265–2276) provide a critique of the Malakooti et al. (Integrated group technology, cell formation, process and production planning with application to the emergency room. Int. J. Prod. Res., 2004, 42, 1769–1786) integrated cell/process/capacity formation (ICPCF) approach and suggest an improved method for solving the ICPCF problem. Rajagopalan and Irani (2006) attempt to solve the emergency room layout problem presented in Malakooti et al. (2004) and claim to have obtained an improved solution with their approach (a hybrid flowshop layout). Although there are certain advantages to considering Rajagopalan and Irani's (2006) approach, we believe that their approach to solving ICPCF problems has significant shortcomings.
18.
19.
M. Z. ANABTAWI N. HILAL A. E. MUFTAH M. C. LEAPER 《Particulate Science and Technology》2013,31(4):391-403
Following on from the work of Anabtawi et al. (2003), this study examined how the volumetric liquid-phase mass transfer coefficient, kLa, of oxygen in air in three-phase spout-fluid beds was affected by varying the system parameters of bed height, bed diameter, gas velocity, and liquid velocity. The liquid used was 0.1% CMC solution, displaying a pseudo-plastic rheology, with 1.75 mm glass spheres as packing. The values of the Sherwood number, in the range 9,000–186,000, were lower than in previous studies (Anabtawi et al., 2003). Gas velocity had a similar effect on kLa as in a bubble column, with results also giving good agreement with previous work on two-phase and three-phase spouted bed systems. The correlation obtained for the effect of liquid velocity on kLa compared well with that of Schumpe et al. (1989). kLa increased with the height of packing to the power of 0.319, and an increase in column diameter also caused an increase in kLa, in agreement with the results of Akita and Yoshida (1973).
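A power-law dependence such as kLa ∝ H^0.319 is typically extracted by log-log regression. A minimal sketch on synthetic data (the prefactor 0.05 and the heights are invented; only the exponent 0.319 comes from the abstract):

```python
import numpy as np

# Synthetic kLa values generated with the reported exponent 0.319;
# the prefactor 0.05 and the packing heights are illustrative only.
H = np.array([0.1, 0.2, 0.4, 0.8])  # packing heights, m
kla = 0.05 * H ** 0.319             # s^-1

# ln(kLa) = ln(prefactor) + 0.319 * ln(H): fit a line in log space
slope, intercept = np.polyfit(np.log(H), np.log(kla), 1)
```

The fitted slope is the correlation exponent and exp(intercept) recovers the prefactor, which is how such correlations are usually reported.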
20.
Kit-Nam Francis Leung 《International Journal of Production Research》2013,51(1):66-71
The main purpose of this corrigendum is to indicate and rectify the same mistakes made by Schrady (1967), Nahmias and Rivera (1979), and Teunter (2004) in solving their respective models, so that subsequent researchers will not repeat them. To this end, we derive the corresponding correct global-optimal formulae for the substitution-policy model (1, n), with infinite or finite recovery (also called repair) rate, using differential calculus, and provide a closed-form expression to identify the optimal positive integer value of n recovery set-ups. In addition, we also rectify the formulae and solution procedure for numerically solving the constrained non-linear programme.
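Closed-form integer rules of this kind usually reduce to comparing the cost at the two integers bracketing the continuous optimiser of a convex cost. A generic sketch (the cost function below is an illustrative set-up/holding trade-off, not Leung's formula):

```python
import math

def optimal_integer_n(cost, n_star):
    """For a cost function convex in n, the optimal positive integer
    lies at floor(n*) or ceil(n*) of the continuous optimiser n*."""
    lo = max(1, math.floor(n_star))
    hi = lo + 1
    return lo if cost(lo) <= cost(hi) else hi

# Illustrative convex trade-off: set-up cost falls in n, holding
# cost rises in n (hypothetical coefficients, not from the article)
cost = lambda n: 100.0 / n + 3.0 * n
n_star = math.sqrt(100.0 / 3.0)  # continuous minimiser of the trade-off
n_opt = optimal_integer_n(cost, n_star)
```

Convexity is what licenses checking only the two neighbouring integers; without it, one would have to scan a wider range of candidate n.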