Similar Documents
 20 similar documents found (search time: 31 ms)
1.
A key problem when using dynamic codes to run static or quasi‐static jobs, in particular stamping process simulations followed by spring‐back calculation, is the dynamic effect. To minimize dynamic effects, suitable loading algorithms must be applied. This paper proposes enhancements to such algorithms and remedies a deficiency of current damping models when they are used to reduce dynamic effects. Essential conditions for generating an ideal loading curve, including the starting and ending conditions, are put forward. A model satisfying all of these requirements is recommended, which can be used directly in stamping process simulation. Associated criteria and a procedure for optimizing the loading curves are also presented. Finally, a simple case study illustrates the practical application in simulations, and several loading curves commonly used in stamping process simulation are compared with the proposed loading curve; their differing features are discussed. Copyright © 2003 John Wiley & Sons, Ltd.

2.
Second‐order experimental designs are employed when an experimenter wishes to fit a second‐order model to account for response curvature over the region of interest. Partition designs are utilized when the output quality or performance characteristics of a product depend not only on the effect of the factors in the current process, but the effects of factors from preceding processes. Standard experimental design methods are often difficult to apply to several sequential processes. We present an approach to building second‐order response models for sequential processes with several design factors and multiple responses. The proposed design expands current experimental designs to incorporate two processes into one partitioned design. Potential advantages include a reduction in the time required to execute the experiment, a decrease in the number of experimental runs, and improved understanding of the process variables and their influence on the responses. Copyright © 2002 John Wiley & Sons, Ltd.

3.
This paper considers an experimentation strategy when resource constraints permit only a single design replicate per time interval and one or more design variables are hard to change. The experimental designs considered are two‐level full‐factorial or fractional‐factorial designs run as balanced split plots. These designs are common in practice and appropriate for fitting a main‐effects‐plus‐interactions model, while minimizing the number of times the whole‐plot treatment combination is changed. Depending on the postulated model, single replicates of these designs can result in the inability to estimate error at the whole‐plot level, suggesting that formal statistical hypothesis testing on the whole‐plot effects is not possible. We refer to these designs as balanced two‐level whole‐plot saturated split‐plot designs. In this paper, we show that, for these designs, it is appropriate to use ordinary least squares to analyze the subplot factor effects at the ‘intermittent’ stage of the experiments (i.e., after a single design replicate is run); however, formal inference on the whole‐plot effects may or may not be possible at this point. We exploit the sensitivity of ordinary least squares in detecting whole‐plot effects in a split‐plot design and propose a data‐based strategy for determining whether to run an additional replicate following the intermittent analysis or whether to simply reduce the model at the whole‐plot level to facilitate testing. The performance of the proposed strategy is assessed using Monte Carlo simulation. The method is then illustrated using wind tunnel test data obtained from a NASCAR Winston Cup Chevrolet Monte Carlo stock car. Copyright © 2012 John Wiley & Sons, Ltd.

4.
Mixture experiments involving process variables are commonly encountered in the manufacturing industry. An experimenter planning mixture experiments in which a process combines machines, methods, and other resources will try to find settings of the design factors that make the product or process insensitive, or robust, to the variability transmitted to the response variable. We propose a genetic algorithm (GA) for generating robust mixture‐process experimental designs involving control and noise variables. When noise variables, which are extremely difficult to control or not routinely controlled during the manufacturing process and may change without warning, are present in a mixture experiment, we propose a robust design setting. In a robust design, a design with lower and flatter fraction of design space curves across all levels of the controllable process variables at varying noise interactions is preferable. We evaluate the designs against these criteria for both the mean model and the slope model. The evaluation demonstrates that the proposed GA designs are robust to the contribution of the interactions involving the noise variables.

5.
The goal of our paper is to demonstrate the cost‐effective use of the Lanczos method for estimating the critical time step in an explicit, transient dynamics code. The Lanczos method can provide a significantly larger estimate for the critical time‐step than an element‐based method (the typical scheme). However, the Lanczos method represents a more expensive method for calculating a critical time‐step than element‐based methods. Our paper shows how the additional cost of the Lanczos method can be amortized over a number of time steps and lead to an overall decrease in run‐time for an explicit, transient dynamics code. We present an adaptive hybrid scheme that synthesizes the Lanczos‐based and element‐based estimates and allows us to run near the critical time‐step estimate provided by the Lanczos method. Copyright © 2006 John Wiley & Sons, Ltd.
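The idea behind this abstract can be illustrated with a small sketch: for explicit central-difference integration, the critical time step is Δt = 2/ω_max, where ω_max² is the largest eigenvalue of M⁻¹K. The paper uses the Lanczos method; the code below substitutes plain power iteration (an assumption, chosen for brevity) on a toy spring-mass chain, so the stiffness values and chain size are illustrative only.

```python
import numpy as np

def critical_time_step(K, M_diag, iters=200):
    """Estimate the explicit critical time step dt = 2 / omega_max,
    where omega_max**2 is the largest eigenvalue of M^{-1} K.
    Power iteration is used here as a simple stand-in for Lanczos."""
    x = np.arange(1.0, K.shape[0] + 1.0)   # deterministic start vector
    lam = 0.0
    for _ in range(iters):
        y = (K @ x) / M_diag               # apply M^{-1} K (lumped, diagonal M)
        lam = np.linalg.norm(y) / np.linalg.norm(x)
        x = y / np.linalg.norm(y)
    return 2.0 / np.sqrt(lam)

# Toy model: 1-D spring-mass chain with stiffness k and lumped masses m
n, k, m = 10, 1.0e4, 2.0
K = 2 * k * np.eye(n) - k * np.eye(n, k=1) - k * np.eye(n, k=-1)
M = np.full(n, m)
dt = critical_time_step(K, M)
```

An element-based bound would instead take the minimum over per-element estimates; the global eigenvalue bound above is the (larger, hence less conservative) system-level estimate the abstract refers to.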

6.
In this paper a self‐tuning run‐by‐run process controller is presented. The controller has the capability of choosing a control parameter dynamically in response to the underlying process disturbances. There are two modules in this controller: a self‐tuning loop trigger module and a run‐by‐run feedback control module. In the self‐tuning loop trigger module, two EWMA control charts are used sequentially to determine if there is a large or medium shift in the process output and to trigger a new self‐tuning loop accordingly. In the run‐by‐run feedback control module, the control parameter and control model are re‐tuned sequentially and a new process recipe is generated, on a run‐by‐run basis, to compensate for the process output's deviation from the target. Monte Carlo simulation results show that the self‐tuning run‐by‐run process controller is superior to the current run‐by‐run process controller with a fixed control parameter.
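The trigger module described above can be sketched as two EWMA charts run side by side: a chart with a larger smoothing constant reacts quickly to large shifts, while a smaller smoothing constant is more sensitive to medium, sustained shifts. The smoothing constants, target, and 3-sigma limit below are illustrative assumptions, not the paper's tuning.

```python
import math

def ewma_flags(x, target=0.0, sigma=1.0, lam_fast=0.4, lam_slow=0.1, L=3.0):
    """Run two EWMA charts over the run sequence x.
    Returns the first run index at which each chart signals (or None).
    lam_fast / lam_slow / L are illustrative choices only."""
    z_fast = z_slow = target
    sig_fast = sig_slow = None
    # steady-state EWMA standard deviation: sigma * sqrt(lam / (2 - lam))
    lim_fast = L * sigma * math.sqrt(lam_fast / (2 - lam_fast))
    lim_slow = L * sigma * math.sqrt(lam_slow / (2 - lam_slow))
    for t, xt in enumerate(x):
        z_fast = lam_fast * xt + (1 - lam_fast) * z_fast
        z_slow = lam_slow * xt + (1 - lam_slow) * z_slow
        if sig_fast is None and abs(z_fast - target) > lim_fast:
            sig_fast = t
        if sig_slow is None and abs(z_slow - target) > lim_slow:
            sig_slow = t
    return sig_fast, sig_slow

# A large step shift after run 10 trips the fast chart first
f, s = ewma_flags([0.0] * 10 + [3.0] * 10)
```

In the controller, either signal would trigger a new self-tuning loop that re-tunes the control parameter before the next recipe is generated.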

7.
The output quality or performance characteristics of a product often depend not only on the effect of the factors in the current process but on the effect of factors from preceding processes. Statistically‐designed experiments provide a systematic approach to study the effects of multiple factors on process performance by offering a structured set of analyses of data collected through a design matrix. One important limitation of experimental design methods is that they have not often been applied to multiple sequential processes. The objective is to create a first‐order experimental design for multiple sequential processes that possess several factors and multiple responses. The first‐order design expands the current experimental designs to incorporate two processes into one partitioned design. The designs are evaluated on the complexity of the alias structure and their orthogonality characteristics. The advantages include a decrease in the number of experimental design runs, a reduction in experiment execution time, and a better understanding of the overall process variables and their influence on each of the responses. Copyright © 2001 John Wiley & Sons, Ltd.
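The orthogonality evaluation mentioned above can be illustrated on a toy partitioned design: assign two factors to each of two sequential processes and check that every main-effect column is balanced and orthogonal to the others. A full 2⁴ factorial is used here purely for illustration; the paper's partitioned first-order designs are constructed differently and are typically smaller.

```python
from itertools import product

# Toy partitioned design: process A contributes factors x1, x2;
# process B contributes x3, x4. One combined 16-run design matrix.
runs = [list(r) for r in product((-1, 1), repeat=4)]

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

cols = list(zip(*runs))                      # one column per factor
orthogonal = all(dot(cols[i], cols[j]) == 0  # pairwise orthogonality
                 for i in range(4) for j in range(i + 1, 4))
balanced = all(sum(c) == 0 for c in cols)    # each factor run at +/-1 equally
```

Orthogonal, balanced columns mean the main effects of both processes can be estimated independently from the single combined experiment, which is the property the partitioned designs are evaluated on.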

8.
Li Ming, Lü Zhenhua. Engineering Mechanics (工程力学), 2017, 34(9): 239-247
Based on a three-dimensional fluid–structure interaction (FSI) finite element dynamic simulation model and a direct coupling algorithm, the nonlinear dynamic response of a cone-type throttle valve is analyzed over the full process of opening from the closed state and then re-closing under a pulsed inlet-velocity excitation, including the flow-rate characteristics, pressure-difference characteristics, and high-frequency fluctuation of the valve opening; wavelet analysis is applied for time–frequency analysis of the valve-opening response. Different combinations of numerical integration methods for the fluid and structure models, and different time-step sizes, are tested in practical applications of the FSI solution algorithm. The influences of system parameters (valve-spool mass, spring parameters, and oil parameters) and excitation parameters (velocity amplitude and pulse width) on the dynamic response of the working process are then compared numerically in detail. The results show that the choice of integration algorithm for the fluid model strongly affects the FSI results. For this valve, changes in the spool mass and the oil bulk modulus significantly affect the spool vibration frequency; changes in oil viscosity mainly affect the opening lag and the vibration phase; and changes in spring stiffness and preload mainly affect the maximum steady opening. Impacts between the spool and the valve seat raise the spool's vibration frequency.

9.
Knowing the time of changes in the mean and variance of a process is crucial for engineers to identify the special cause quickly and correctly. Because assignable causes may give rise to changes in mean and variance at the same time, monitoring the mean and variance simultaneously is required. In this paper, a mixture likelihood approach is proposed to detect shifts in mean and variance simultaneously in a normal process. We first recast the change-point model as a mixture model and then employ the expectation and maximization algorithm to estimate the time of shifts in mean and variance simultaneously. The proposed method, called EMCP (expectation and maximization change point), can be used in both phase I and II applications without knowledge of the in-control process parameters. Moreover, EMCP can detect the times of multiple shifts and simultaneously produce estimates of the shifts in each individual segment. Numerical data and real datasets are employed to compare EMCP with the direct statistical maximum likelihood method that does not use mixture models. The experimental results show the superiority and effectiveness of the proposed EMCP. EMCP's advantage in detecting the time of small shifts is particularly important and beneficial for engineers seeking to identify assignable causes rapidly and accurately in phase II applications, in which small shifts occur more often and hence lead to a large average run length. Copyright © 2015 John Wiley & Sons, Ltd.
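For context, the baseline that EMCP is compared against can be sketched directly: the direct maximum-likelihood method splits the series at a candidate change point τ, fits a normal distribution to each segment by MLE, and picks the τ that maximizes the total log-likelihood. This is the comparator from the abstract, not EMCP itself (which wraps the change-point model in a mixture and runs EM); the minimum segment length is an illustrative assumption.

```python
import math

def mle_change_point(x, min_seg=2):
    """Direct single-change-point MLE for a shift in mean and/or variance:
    maximize the sum of per-segment normal log-likelihoods over tau.
    This is the 'direct statistical maximum likelihood method' the
    abstract compares EMCP against -- not the EMCP algorithm."""
    def seg_loglik(seg):
        n = len(seg)
        mu = sum(seg) / n
        var = sum((v - mu) ** 2 for v in seg) / n
        var = max(var, 1e-12)                 # guard degenerate segments
        return -0.5 * n * (math.log(2 * math.pi * var) + 1.0)

    best_tau, best_ll = None, -math.inf
    for tau in range(min_seg, len(x) - min_seg + 1):
        ll = seg_loglik(x[:tau]) + seg_loglik(x[tau:])
        if ll > best_ll:
            best_tau, best_ll = tau, ll
    return best_tau

# Mean shift of +5 after observation 20 (deterministic toy data)
series = [0.0, 1.0] * 10 + [5.0, 6.0] * 10
tau_hat = mle_change_point(series)
```

EMCP's mixture/EM formulation generalizes this single-split search to multiple shifts and produces per-segment shift estimates in the same pass.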

10.
Frequency sweeps in structural dynamics, acoustics, and vibro‐acoustics require evaluating frequency response functions for a large number of frequencies. The brute force approach for performing these sweeps leads to the solution of a large number of large‐scale systems of equations. Several methods have been developed for alleviating this computational burden by approximating the frequency response functions. Among these, interpolatory model order reduction methods are perhaps the most successful. This paper reviews this family of approximation methods with particular attention to their applicability to specific classes of frequency response problems and their performance. It also includes novel aspects pertaining to the iterative solution of large‐scale systems of equations in the context of model order reduction and frequency sweeps. All reviewed computational methods are illustrated with realistic, large‐scale structural dynamic, acoustic, and vibro‐acoustic analyses in wide frequency bands. These highlight both the potential of these methods for reducing CPU time and their limitations. Copyright © 2012 John Wiley & Sons, Ltd.

11.
When there are constraints on resources, an unreplicated factorial or fractional factorial design can allow efficient exploration of numerous factor and interaction effects. A half‐normal plot is a common graphical tool used to compare the relative magnitudes of effects and to identify important effects from these experiments when no estimate of error is available. An alternative is to use a least absolute shrinkage and selection operator (lasso) plot to examine the pattern of model terms selected from an experiment. We examine how both the half‐normal and lasso plots are affected by the absence of individual observations or by an outlier, and the robustness of conclusions obtained from these two techniques for identifying important effects in factorial experiments. The methods are illustrated with two examples from the literature.
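The half-normal plot described above pairs the sorted absolute effect estimates with half-normal quantiles; active effects stand off the line through the near-zero bulk. The sketch below computes the contrast-based effects for an unreplicated 2³ factorial and the matching quantiles. The responses are made-up illustration data in which A and AB are active; they are not from the paper's examples.

```python
from itertools import product
from statistics import NormalDist

# Unreplicated 2^3 factorial in factors A, B, C (standard order)
design = [list(r) for r in product((-1, 1), repeat=3)]
# Illustrative responses: y = 8 + 4A + 2AB plus tiny deterministic noise
y = [6.1, 5.9, 2.1, 1.9, 10.1, 9.9, 14.1, 13.9]

labels = ["A", "B", "C", "AB", "AC", "BC", "ABC"]
idx = {"A": 0, "B": 1, "C": 2}

def contrast(term):
    """Column of +/-1 for a main effect or interaction."""
    out = []
    for row in design:
        v = 1
        for ch in term:
            v *= row[idx[ch]]
        out.append(v)
    return out

# effect = (contrast . y) / (n/2), with n = 8 runs
effects = {t: sum(c * yy for c, yy in zip(contrast(t), y)) / 4 for t in labels}

# Half-normal plot coordinates: sorted |effect| vs Phi^{-1}(0.5 + 0.5 p_i)
m = len(labels)
ordered = sorted(labels, key=lambda t: abs(effects[t]))
quantiles = [NormalDist().inv_cdf(0.5 + 0.5 * (i - 0.5) / m)
             for i in range(1, m + 1)]
```

Plotting `quantiles` against the ordered `|effects|` would show A and AB far above the line formed by the five inert terms, which is exactly the visual judgment the robustness study interrogates.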

12.
Recently, there has been interest in applying statistical process monitoring methods to processes regulated by feedback controllers, in order to eliminate assignable causes and achieve reduced overall variability. In this paper, we propose a Bayesian change‐point method to monitor processes regulated with proportional‐integral controllers. The approach is based on fitting an exponential rise model to the control input actions in response to a step shift and employs a change‐point method to detect the change. Simulation studies show that the proposed method has better run‐length performance in detecting step shifts in controlled processes than a Shewhart chart on individual observations or a special‐cause chart on the residuals of a time series model. Copyright © 2013 John Wiley & Sons, Ltd.
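The model-fitting step in the abstract can be sketched in isolation: after a step disturbance, a PI controller's input trajectory follows an exponential rise u(t) = u_ss(1 − e^(−t/θ)). The code below fits that curve by a simple grid-search least squares, a deliberately crude stand-in for the paper's Bayesian change-point formulation; the grids, true parameters, and sampling times are all illustrative assumptions.

```python
import math

def fit_exponential_rise(t, u, theta_grid, uss_grid):
    """Least-squares fit of u(t) = u_ss * (1 - exp(-t/theta)) by grid
    search. A simple stand-in for the model-fitting step; the paper
    embeds this rise model in a Bayesian change-point method."""
    best = (None, None, math.inf)
    for theta in theta_grid:
        for uss in uss_grid:
            sse = sum((uss * (1.0 - math.exp(-ti / theta)) - ui) ** 2
                      for ti, ui in zip(t, u))
            if sse < best[2]:
                best = (theta, uss, sse)
    return best[0], best[1]

# Noiseless toy trajectory with theta = 1.5, u_ss = 2.0
t = [0.5 * i for i in range(1, 21)]
u_obs = [2.0 * (1.0 - math.exp(-ti / 1.5)) for ti in t]
theta_hat, uss_hat = fit_exponential_rise(
    t, u_obs,
    theta_grid=[0.5 + 0.1 * k for k in range(30)],   # 0.5 .. 3.4
    uss_grid=[1.8 + 0.05 * k for k in range(10)])    # 1.8 .. 2.25
```

In the monitoring scheme, the fitted rise in the control actions (rather than the process output, which the controller keeps near target) is what reveals the step shift.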

13.
Dynamic load identification based on a neural network model   (cited by 21; self-citations: 0, citations by others: 21)
Based on structural dynamics theory, an autoregressive function for a time-domain neural network algorithm is derived, and a corresponding neural network model with time-delayed feedback is established for dynamic load identification. The basic learning algorithm and recall algorithm of this network are described. Numerical simulations and verification tests on specimens show that, when used for dynamic load identification, this neural network model offers high accuracy, no accumulated error, and strong noise immunity, and that it is applicable to various types of dynamic loads, with particular advantages in identifying impact loads. The model requires little information during calibration training and has a low test cost, making it a new dynamic load identification method that is well worth promoting in engineering applications.

14.
With nanometer lateral and angstrom vertical resolution, atomic force microscopy (AFM) has contributed unique data improving the understanding of lipid bilayers. Lipid bilayers exist in several temperature‐dependent states, termed phases; the main phases are the solid and fluid phases. The transition temperature between the solid and fluid phases is specific to the lipid composition. Under certain conditions some lipid bilayers adopt a so‐called ripple phase, a structure in which solid‐ and fluid‐phase domains alternate with constant periodicity. Because of its narrow regime of existence and its heterogeneity, the ripple phase and its transition dynamics remain poorly understood. Here, a temperature control device is developed and integrated into a high‐speed atomic force microscope (HS‐AFM) to observe the dynamics of the reversible phase transition from ripple phase to fluid phase in real time. Based on HS‐AFM imaging, the transition from ripple phase to fluid phase, and from ripple phase through a metastable ripple phase to fluid phase, could be studied reversibly, phenomenologically, and quantitatively. The results show phase transition hysteresis during fast cooling and heating, while both melting and condensation occur at 24.15 °C in the quasi‐steady state. A second metastable ripple phase with larger periodicity forms at the ripple‐to‐fluid transition when the buffer contains Ca2+. The presented temperature‐controlled HS‐AFM is a unique experimental system for observing the dynamics of temperature‐sensitive processes at the nanoscopic level.

15.
This paper deals with a proposed approach to estimate the dynamic characteristics of road vehicles using only on‐the‐road response data. Previous work undertaken by the authors was ultimately unable to be validated as the actual pavement elevation profile traversed by the single‐wheeled, idealized vehicle was unknown. In order to address this, the single‐wheeled vehicle used by the authors was instrumented to operate as an inertial profilometer in order to measure the actual longitudinal pavement elevation profile to establish the transmissibility frequency response function of the vehicle during operation. The on‐the‐road transmissibility frequency response functions were compared with those established in the laboratory (using a large‐scale vibration table) and found to significantly differ, particularly in the level of damping of the sprung mass mode (body) of the vehicle. The unsprung mass mode (axle‐hop) resonant frequency is also consistently observed to shift to a lower frequency during operation (on‐the‐road). Two approaches are outlined to estimate the spectral exponent of the assumed pavement elevation spectral model but were found to yield inaccurate results because of the variation in the pavement profiles travelled by the vehicle during each experimental run. The in‐built profilometer revealed that it is incorrect to use an assumed spectral model for the excitation as it is unable to accurately represent the actual pavement spectrum travelled over by the vehicle. Copyright © 2015 John Wiley & Sons, Ltd.

16.
Conventional dynamic experiments on rubbers have several limitations, including a low signal‐to‐noise ratio and a long period during which the specimen is not in static equilibrium, which makes it difficult to separate constitutive material behaviour from specimen response. To overcome these limitations, we build on previous research in which the Virtual Fields Method (VFM) was applied to dynamic tensile experiments. A previous study demonstrated that the VFM can identify the material parameters of a hyperelastic model for a given rubber from optical measurements of wave propagation in the rubber, eliminating the need for force measurements by instead using acceleration fields as a “virtual load cell.” To successfully characterise the strain hardening in the material, large deformations are required; these were achieved by applying static preloads to the specimen before the dynamic loading. To then apply the VFM, measurements of the static force, or strain, or both are required. This paper explores different methods of applying the VFM, in particular comparing the use of a static force measurement, as in the previous research, with methods that require only strain fields in order to apply the incremental equation of motion. Finite element simulations were conducted to compare the sensitivity of the identification to experimental error sources between the two VFM implementations; the experimental data used in the previous studies were then applied to the incremental VFM. A further experimental comparison is provided between constitutive parameters obtained in tensile experiments using the VFM and compressive measurements from a modified split Hopkinson bar technique equipped with a piezoelectric force transducer. Finally, the effects of preloading and relaxation in the material are discussed.

17.
In modern industries, advanced imaging technology has seen ever-greater investment to cope with the increasing complexity of systems, to improve the visibility of information, and to enhance operational quality and integrity. As a result, large amounts of imaging data are readily available. This presents great challenges to state‐of‐the‐art practices in process monitoring and quality control. Conventional statistical process control (SPC) focuses on key characteristics of the product or process and is rather limited in handling the complex structures of high‐dimensional imaging data. New SPC methods and tools are urgently needed to extract useful information from in situ image profiles for process monitoring and quality control. In this study, we developed a novel dynamic network scheme to represent, model, and control time‐varying image profiles. A Potts model Hamiltonian approach is introduced to characterize community patterns and organizational behaviors in the dynamic network. Further, new statistics are extracted from network communities to characterize and quantify the dynamic structures of image profiles. Finally, we design and develop a new control chart, namely the network‐generalized likelihood ratio chart, to detect the change point of the underlying dynamics of complex processes. The proposed methodology is implemented and evaluated in real‐world applications in ultraprecision machining and biomanufacturing processes. Experimental results show that the proposed approach effectively characterizes and monitors variations in the complex structures of time‐varying image data. The new dynamic network SPC method shows strong potential for general application in a diverse set of domains with in situ imaging data.

18.
Competing with successful products has become perplexing, with several uncertainties that change over time as customers' expectations are dynamic. That is why manufacturing firms strive exhaustively for a better competitive frontier using well-established and innovative product development (PD) processes. In this paper, we answer three research questions: (i) What are the effects of front-loading in PD? (ii) Can we improve our PD process endlessly? (iii) When is the critical time at which the firm should take remedial action for improvement? As a contribution to the vast number of improvement methods in new product development (NPD), this paper investigates the effects of front-loading using set-based concurrent engineering (SBCE) on cost and lead time. Models are developed and treated using a system dynamics (SD) approach. We assign a hypothetical upfront investment for SBCE and compare its effects on the total cost and lead time of the development process. The research finds that the total cost of PD is reduced by almost half, although the front-loading is higher in order to encompass multiple design alternatives, and the total product lead time is reduced by almost 20%. The model reveals the critical time for improvement of the PD process. We use an SD tool (STELLA) for simulation and visualization of the complex PD model, with SBCE as one of several strategies to front-load activities in the NPD process.

19.
The D‐optimality criterion is often used in computer‐generated experimental designs when the response of interest is binary, such as when the attribute of interest can be categorized as pass or fail. The majority of methods in the generation of D‐optimal designs focus on logistic regression as the base model for relating a set of experimental factors with the binary response. Despite the advances in computational algorithms for calculating D‐optimal designs for the logistic regression model, very few have acknowledged the problem of separation, a phenomenon where the responses are perfectly separable by a hyperplane in the design space. Separation causes one or more parameters of the logistic regression model to be inestimable via maximum likelihood estimation. The objective of this paper is to investigate the tendency of computer‐generated, nonsequential D‐optimal designs to yield separation in small‐sample experimental data. Sets of local D‐optimal and Bayesian D‐optimal designs with different run (sample) sizes are generated for several “ground truth” logistic regression models. A Monte Carlo simulation methodology is then used to estimate the probability of separation for each design. Results of the simulation study confirm that separation occurs frequently in small‐sample data and that separation is more likely to occur when the ground truth model has interaction and quadratic terms. Finally, the paper illustrates that different designs with identical run sizes created from the same model can have significantly different chances of encountering separation.
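The Monte Carlo methodology described above can be sketched in its simplest form: simulate binary responses from a "ground truth" logistic model at the design points and count how often the simulated data are perfectly separable. The one-covariate design, coefficients, and simulation size below are illustrative assumptions; the paper studies multi-factor local and Bayesian D-optimal designs.

```python
import math
import random

def separated_1d(x, y):
    """Separation check for a single covariate: the data are separable
    iff some threshold on x splits the 0-responses from the 1-responses."""
    x0 = [xi for xi, yi in zip(x, y) if yi == 0]
    x1 = [xi for xi, yi in zip(x, y) if yi == 1]
    if not x0 or not x1:
        return True              # all-0 or all-1 responses are degenerate too
    return max(x0) <= min(x1) or max(x1) <= min(x0)

def prob_separation(beta0, beta1, design, n_sim=2000, seed=7):
    """Monte Carlo estimate of the chance that a design yields separation
    under a ground-truth logistic model (illustrative single-factor case)."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(n_sim):
        y = [1 if rng.random() < 1.0 / (1.0 + math.exp(-(beta0 + beta1 * x)))
             else 0 for x in design]
        hits += separated_1d(design, y)
    return hits / n_sim

p_small = prob_separation(0.0, 3.0, [-1.0, -0.5, 0.5, 1.0])      # 4 runs
p_large = prob_separation(0.0, 3.0, [-1.0, -0.5, 0.5, 1.0] * 4)  # 16 runs
```

Running this shows the small-sample effect the abstract reports: the 4-run design is separated in most simulated datasets, while replicating the same points four times sharply lowers the separation probability.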

20.
DNA hybridization in the vicinity of surfaces is a fundamental process for self‐assembled nanoarrays, nanocrystal superlattices, and biosensors. It is widely recognized that solid surfaces alter molecular forces governing hybridization relative to a bulk solution, and these effects can either favor or disfavor the hybridized state depending on the specific sequence and surface. Results presented here provide new insights into the dynamics of DNA hairpin‐coil conformational transitions in the vicinity of hydrophilic oligo(ethylene glycol) (OEG) and hydrophobic trimethylsilane (TMS) surfaces. Single‐molecule methods are used to observe the forward and reverse hybridization hairpin‐coil transition of adsorbed species while simultaneously measuring molecular surface diffusion in order to gain insight into surface interactions with individual DNA bases. At least 35 000 individual molecular trajectories are observed on each type of surface. It is found that unfolding slows and the folding rate increases on TMS relative to OEG, despite stronger attractions between TMS and unpaired nucleobases. These rate differences lead to near‐complete hairpin formation on hydrophobic TMS and significant unfolding on hydrophilic OEG, resulting in the surprising conclusion that hydrophobic surface coatings are preferable for nanotechnology applications that rely on DNA hybridization near surfaces.
