Similar Documents
20 similar documents found (search time: 18 ms)
1.
Abstract

Both the US Department of Energy (DOE) and EPRI are developing models for the evolution of a secure energy future for the USA. Our general views are very similar. However, there are some differences in approach. DOE is concerned with all energy issues in the US future, including electricity, transportation fuels, and the industrial, commercial, and residential energy sectors; EPRI is concerned specifically with the electricity component, principally in the USA, but, as does DOE, also takes a global view.

Both organizations take what is now known as a ‘Roadmapping’ approach. Roadmapping is an example of a ‘top-down’ planning method: it involves specifying a ‘destination’ towards which the research and development program is aimed. In the DOE case, the destination is a secure energy future. Typically, Roadmapping is concerned with relatively long time scales. Time scales for different technologies are, of course, very different; in a fast-moving technology such as semiconductors, five to ten years may be a long time. For energy, the equipment is large, planning and construction times are long, and the expected lifetimes of the major components are not less than twenty years, and more typically up to forty years. The time scale that both of our organizations consider is in the range of 20–50 years into the future. The DOE model is called ‘Vision 21’; its specific destination is the technical design basis for near-zero-emission fossil-fueled energy plants. The EPRI model is called the ‘Electricity Technology Roadmap’, and more recently we have ‘A Vision of the Electricity System of 2020.’ An important aspect of the method common to both DOE and EPRI is that the destination is developed by a ‘stakeholder’ group: this involves not only the researchers and developers, but also the eventual customers for the technology and the users of its products, including members with environmental and societal concerns.

In this paper, we will highlight some of the scenarios that emerge from these models. The first part will concentrate on the Department of Energy program; the latter part on the EPRI view, remembering that we are in close agreement on most aspects.

2.
Routine applications of design of experiments (DOE) by non-mathematicians assume that each response value is static in nature, i.e. has an expected value that is constant for a given set of input factor settings. When this assumption is not valid, it is important to capture the dynamic characteristics of the response for effective process or system characterization, monitoring, and control. To recognize the self-changing nature of the response, caused by factors other than those built into the DOE, and thereby gain a better ability to shape its behavior, this paper describes the reasoning and procedure needed for such ‘parametric responses’, via common techniques of mathematical modeling and optimization. The procedure is intuitive, yet essential and useful in DOE studies as these become increasingly popular among practitioners in the context of improvement projects, such as those related to Six Sigma or stand-alone performance optimization initiatives. Copyright © 2013 John Wiley & Sons, Ltd.
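As a toy illustration of the parametric-response idea (all factor settings and coefficients below are hypothetical, not from the paper): each DOE run yields a response that drifts over time, so the analysis works with fitted parameters of each run, here an intercept and a drift rate, rather than a single static value.

```python
# Hypothetical parametric responses: each DOE run i yields y_i(t) = a_i + b_i * t
# rather than one static value; downstream analysis uses the (a_i, b_i) pairs.
runs = {
    (1, 1): (5.0, -0.10),   # (factor settings): (intercept, drift per hour)
    (1, 2): (6.5, -0.25),
    (2, 1): (4.8, -0.02),
    (2, 2): (6.0, -0.15),
}

def response_at(setting, t):
    """Evaluate the fitted parametric response of one run at time t."""
    a, b = runs[setting]
    return a + b * t

# Pick the factor setting whose response stays highest after 10 hours of drift,
# something a static-response analysis could not distinguish.
best = max(runs, key=lambda s: response_at(s, 10.0))
```

Under these invented numbers, the run with the smallest drift wins even though its initial response is the lowest, which is exactly the behavior a static analysis would miss.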

3.
《Composites Part A》2004,35(10):1119-1123
The ‘black box’ of the kinetics of the fracture process (damage accumulation) under short-term tensile loading is ‘elucidated’ from experimental and theoretical points of view. The material is injection-moulded short-glass-fibre-reinforced polyoxymethylene, tested at three constant loading rates. An unconventional small-angle X-ray scattering technique is applied to register damage. A theoretical model for the analysis of damage development is proposed. The equations derived describe the damage accumulation with a very high correlation coefficient (greater than 0.999). For the material investigated and the applied loading rates, damage accumulation depends on the loading percentage, but not on time. A qualitative distribution of all potential spots where damage could occur is shown. The experimental investigations can measure only the ‘weakest’ of these damage spots, and this fraction is very small compared to the maximal potential damage.

4.
The identification (or ‘calibration’, or ‘inverse’) problem dealt with in this paper can be outlined as follows. The ‘real system’ is a deep rock formation which can be regarded as isotropic and elastoplastic. A mathematical-numerical model, intended for the analysis of its response to excavations, rests on the assumption of an elastic-perfectly plastic Mohr–Coulomb constitutive law and of a homogeneous isotropic initial (in situ) stress state. The values of cohesion, friction angle, and initial stress to be introduced in this model are identified by minimizing a measure of the discrepancy (error) between the theoretical and experimental relationships (pressure vs. average diameter increase) from a standard pressure tunnel test carried out well inside the nonlinear range. For the error minimization process, two very general ‘search techniques’ are adopted and discussed from the computational standpoint: the flexible polyhedron (modified simplex) strategy and the alternating variable strategy in the Rosenbrock version. Both are found to be adequate for solving this inverse problem, where the mathematical model has to be used as a ‘black box’ in a purely numerical identification process.
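A minimal sketch of the alternating-variable idea, with a toy two-parameter model and synthetic ‘measured’ data standing in for the pressure vs. diameter records (the model form, data, and step schedule are all invented for illustration):

```python
def discrepancy(params, data):
    """Sum of squared errors between a toy linear model p = c + k*u
    and 'measured' (u, p) pairs -- a stand-in for the real test records."""
    c, k = params
    return sum((c + k * u - p) ** 2 for u, p in data)

def alternating_variable_search(f, x0, data, step=1.0, tol=1e-6, shrink=0.5):
    """Improve one parameter at a time; halve the step when no move helps."""
    x = list(x0)
    while step > tol:
        improved = False
        for i in range(len(x)):
            for d in (step, -step):
                trial = x[:]
                trial[i] += d
                if f(trial, data) < f(x, data):
                    x, improved = trial, True
        if not improved:
            step *= shrink
    return x
```

The flexible-polyhedron (modified simplex) companion strategy mentioned in the abstract is available off the shelf as `scipy.optimize.minimize(..., method='Nelder-Mead')`, which treats the model as a black box in exactly the same way.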

5.
Statistically designed experiments provide a proactive means for improving reliability; moreover, they can be used to design products that are robust to noise factors which are hard or impossible to control. Traditionally, failure‐time data have been collected; for high‐reliability products, it is unlikely that failures will occur in a reasonable testing period, so the experiment will be uninformative. An alternative, however, is to collect degradation data. Take, for example, fluorescent lamps whose light intensity decreases over time. Observation of light‐intensity degradation paths, given that they are smooth, provides information about the reliability of the lamp, and does not require the lamps to fail. This paper considers experiments with such data for ‘reliability improvement’, as well as for ‘robust reliability achievement’ using Taguchi's robust design paradigm. A two‐stage maximum‐likelihood analysis based on a nonlinear random‐effects model is proposed and illustrated with data from two experiments. One experiment considers the reliability improvement of fluorescent lamps. The other experiment focuses on robust reliability improvement of light‐emitting diodes. Copyright © 2001 John Wiley & Sons, Ltd.
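For intuition, here is the degradation-path idea under an assumed exponential decay model (the model form, threshold, and numbers are illustrative, not the paper's): a fitted smooth path lets us predict when intensity would cross a failure threshold without ever observing a failure.

```python
import math

def pseudo_failure_time(L0, rate, threshold):
    """Time at which a fitted exponential degradation path L(t) = L0*exp(-rate*t)
    crosses the failure threshold. No actual failure needs to be observed:
    the fitted path is extrapolated instead."""
    return math.log(L0 / threshold) / rate

# Illustrative lamp: initial intensity 100, 1% decay per unit time,
# 'failure' defined as intensity dropping below 60.
t_fail = pseudo_failure_time(100.0, 0.01, 60.0)
```

In a designed experiment, one such pseudo-failure time per unit (with `L0` and `rate` estimated per unit, e.g. via a random-effects model as in the paper) becomes the response analyzed across factor settings.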

6.
Roux FS 《Applied optics》1995,34(23):5045-5052
I present an optical system for the polar formatting of data in a spotlight-mode synthetic aperture radar. This system is implemented with only one diffractive optical element (DOE). Previously such a DOE could not be produced because the phase of the required transmission function of the DOE does not obey the continuity condition, which is a prerequisite for the conventional implementation of such optical transforms. Here I show how a DOE can be produced to perform the complete polar-formatting transform by incorporating branch-point phase singularities in the transmission function of the DOE. The computation of the transmission function is shown, and numerically computed diffraction patterns obtained from this DOE are also shown.

7.
The effects of deep cryogenic treatment (DCT) on the static mechanical properties of AISI 302 austenitic stainless steel were investigated through experimental testing. The results of the tensile and hardness tests are discussed and compared to data and microstructural observations from the DCT literature concerning the same class of steel. In addition, the influence of two important treatment parameters, namely the soaking time and the minimum temperature, is analysed through a full factorial design of experiments (DOE) and by means of a first-approximation model, in order to obtain confirmation and suggestions about the possible use of DCT as a standard practice for improving the mechanical properties of stainless steels. Particular focus is given to the measured changes in the elastic modulus and the hardness as representative measures of two different deformation mechanisms.
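A full factorial layout for two treatment parameters simply runs every combination of levels; a sketch (the levels below are hypothetical, not those of the study):

```python
from itertools import product

# Hypothetical two-level settings for the two DCT treatment parameters.
soaking_time_h = [24, 48]     # soaking time, hours
min_temp_C = [-150, -196]     # minimum treatment temperature, Celsius

# Full factorial DOE: every combination of levels becomes one experimental run.
runs = list(product(soaking_time_h, min_temp_C))
```

With two factors at two levels this gives a 2x2 design of four runs, enough to estimate both main effects and their interaction in a first-approximation model.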

8.
Interfacial fatigue crack growth in foam core sandwich structures
This paper deals with the experimental measurement of face/core interfacial fatigue crack growth rates in foam core sandwich beams. A slightly modified version of the so-called ‘cracked sandwich beam’ specimen is used: a sandwich beam containing a simulated face/core interface crack. The specimen is precracked so that a more realistic crack front is created prior to fatigue growth measurements. The crack is then propagated along the interface, in the core material, during fatigue loading, as is assumed to occur in a real sandwich structure. The crack growth is stable even under constant amplitude testing. Stress intensity factors are obtained from FEM analysis which, combined with the experimental data, yield standard da/dN versus ΔK curves from which classical Paris’ law constants can be extracted. The experiments to determine stress intensity factor threshold values are performed using a manual load-shedding technique.
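Extracting Paris' law constants from da/dN versus ΔK data amounts to a straight-line fit in log-log space, since da/dN = C·(ΔK)^m becomes log(da/dN) = log C + m·log ΔK. A minimal sketch with synthetic data (the values are invented for illustration):

```python
import math

def fit_paris_law(dK, dadN):
    """Fit da/dN = C * (dK)**m by least-squares linear regression in
    log-log space; returns (C, m)."""
    x = [math.log(k) for k in dK]
    y = [math.log(r) for r in dadN]
    n = len(x)
    xm, ym = sum(x) / n, sum(y) / n
    # Slope of the log-log line is the Paris exponent m.
    m = sum((xi - xm) * (yi - ym) for xi, yi in zip(x, y)) / \
        sum((xi - xm) ** 2 for xi in x)
    # Intercept gives log C.
    C = math.exp(ym - m * xm)
    return C, m
```

With real, scattered measurements the same fit applies; only the residuals grow.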

9.
Although the groove and slot have been widely utilized for horn design to achieve high uniformity, their effects on uniformity have not been analyzed thoroughly. In this work, spool and bar horns for ultrasonic bonding are designed in a systematic way using the design of experiments (DOE) to achieve high amplitude uniformity of the horn. Three-dimensional modal analysis is conducted to predict the natural frequency, amplitude, and stress of the horns, and the DOE is employed to analyze the effects of the groove and slot on the amplitude uniformity. The design equations are formulated to determine the optimum dimensions of the groove and slot, and the uniformity is found to be influenced most significantly by the groove depth and slot width. Displacements of the spool and bar horns were measured using a laser Doppler vibrometer (LDV), and the predicted results are in good agreement with the experimental data.

10.
Most tolerance design optimization studies have focused on developing exact methods to reduce manufacturing cost or to increase product quality. The inherent assumption in this approach is that assembly functions are known before a tolerance design problem is analyzed. With current CAD (Computer‐Aided Design) software, design engineers can address the tolerance design problem without knowing assembly functions in advance. In this study, VSA‐3D/Pro software, which contains a set of simulation tools, is employed to generate experimental assembly data. These computer experimental data are converted into other forms, such as total cost and the Process Capability Index, where total cost consists of tolerance cost and quality loss. Empirical equations relating the two variables are then obtained through statistical regression. After that, mathematical optimization and sensitivity analysis are performed within the constrained ‘desired design and process’ space. Consequently, tolerance design via computer experiments enables engineers to optimize design tolerances and manufacturing variation to achieve the highest quality at the most cost-effective price during the design and planning stage. Copyright © 2001 John Wiley & Sons, Ltd.
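The total-cost trade-off can be sketched with a hypothetical tolerance-cost term that falls as the tolerance loosens and a Taguchi-style quadratic quality loss that rises with it (both coefficients are invented for illustration, not drawn from the VSA-3D/Pro study):

```python
def total_cost(t, a=2.0, k=1.0):
    """Hypothetical total cost of a tolerance t: manufacturing cost a/t
    (tighter tolerances cost more) plus quadratic quality loss k*t**2
    (looser tolerances degrade quality)."""
    return a / t + k * t ** 2

# Minimize by scanning a discretized 'desired design' space.
candidates = [0.01 * i for i in range(1, 500)]
t_best = min(candidates, key=total_cost)
# Sanity check against calculus: d/dt = -a/t**2 + 2*k*t = 0
# gives t* = (a / (2*k)) ** (1/3).
t_closed_form = (2.0 / (2.0 * 1.0)) ** (1.0 / 3.0)
```

In the paper's setting, `total_cost` is replaced by the regression equations fitted to the simulated assembly data, but the optimization step has the same shape.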

11.
Implementation of closed-loop supply chain (CLSC) has gained increased consideration in the last few years owing to an increase in environmental concerns, product returns and scarcity of natural resources. It aids in improving environmental, economic and social performances. The purpose of this study is to examine the impact of CLSC critical success factors (CSFs) on performance outcomes. Firstly, CSFs and performance outcomes are extracted by conducting exploratory factor analysis using SPSS software. Then, the relationships between CLSC CSFs and performance outcomes are empirically tested by the Partial Least Squares-Structural Equation Modelling (PLS-SEM) approach, using data collected from 138 professionals working in remanufacturing, refurbishing and recycling operations in North American manufacturing organisations. Empirical analysis demonstrates that the CSFs ‘environmental concerns’, ‘sustainable production’ and ‘product design and collection’ have a significant positive effect on environmental performance. Results also validate the significant positive effect of the CSFs ‘demand and inventory management’ and ‘raw material prices’ on economic performance. To our knowledge, this is the first study that examines the impact of CLSC CSFs on performance outcomes. The results provide managers in manufacturing organisations with insights into the most important CSFs for improving performance.

12.
Various methods have recently been proposed by Ekberg, Kabo and Andersson to assess the risk of rolling contact fatigue failure; in particular, the Dang Van multiaxial fatigue criterion has been suggested in a simple approximate formulation. In this note, it is found that the implied approximation can be very significant; the calculation is improved and corrected, focusing on plane problems but covering the complete range of possible friction coefficients. It is found that the predicted fatigue limit can exceed the limit under standard uniaxial tension/compression by much more for ‘hard materials’ than for ‘ductile materials.’ This is in qualitative agreement with, for example, gear design standards, but in quantitative terms, particularly for the frictionless condition, the predicted limit seems possibly too high, indicating the need for careful comparison with experimental results. Some comments are devoted to the interplay of shakedown and fatigue.
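In its usual form, the Dang Van criterion requires that, at every instant of the load cycle, the microscopic shear stress plus a material constant times the hydrostatic stress stays below a material limit. A minimal safety check might look like this (the stress histories and constants below are illustrative, not from the note):

```python
def dang_van_safe(tau, p_hyd, a_dv, b):
    """Dang Van multiaxial fatigue check: safe if, over the sampled load
    cycle, max(tau(t) + a_dv * p_hyd(t)) stays below the material limit b.
    tau: microscopic shear stress history; p_hyd: hydrostatic stress history;
    a_dv, b: material constants (typically fitted from two fatigue limits)."""
    return max(t + a_dv * p for t, p in zip(tau, p_hyd)) <= b

# Illustrative two-instant load cycle, stresses in MPa.
shear = [100.0, 120.0]
hydro = [50.0, 30.0]
```

The approximate formulations discussed in the note differ precisely in how the shear and hydrostatic histories entering this inequality are estimated for rolling contact.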

13.
The standard fatigue data-processing procedure, published in ASTM E647, is not adapted to the use of modern crack length measurement techniques. Because the use of this standard is usually required by journals, the raw data are often reduced to only a few data points. In this way, valuable information is simply thrown away and mathematical errors are unintentionally introduced. More importantly, the fact that no satisfactory reduction method exists has led to de-standardization of the processing procedure. A new standard processing method is therefore desired. In this paper a new data-processing method, referred to as the ‘adaptive da/dN method’, is proposed and discussed. This method is suitable for both optical and modern (electrical or automated) measurement techniques, as well as modern (computer-assisted or -controlled) processing techniques. The adaptive da/dN method is validated both with data generated with a controlled amount of scatter and with actual experimental data. It behaves more accurately than the ASTM standard for all data types.
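For reference, the simplest of the ASTM E647 reduction methods, the point-to-point secant, converts crack length versus cycle-count data into growth rates as follows (the paper's adaptive method itself is not reproduced here; this is the baseline it is compared against):

```python
def secant_dadn(a, N):
    """Point-to-point (secant) crack growth rates per ASTM E647:
    da/dN between consecutive (crack length a, cycle count N) readings."""
    return [(a[i + 1] - a[i]) / (N[i + 1] - N[i]) for i in range(len(a) - 1)]

# Illustrative readings: crack length in mm, cycles.
rates = secant_dadn([10.0, 10.2, 10.5], [0.0, 1000.0, 2000.0])
```

With densely sampled automated measurements this simple differencing amplifies scatter, which is one motivation for more careful processing schemes.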

14.
Proper numerical modeling of Friction Stir Processes (FSPs) requires the identification of a suitable constitutive equation which accurately describes the stress-strain material behavior over the applicable range of strains, strain rates, and temperatures. While some such equations may be perfectly suitable for simulating processes characterized by low (or high) strains and temperatures, FSPs are widely recognized for their relatively moderate ranges of these state variables. In this work, a number of constitutive equations for describing flow stress in metals were screened for their suitability for modeling Friction Stir Processes of twin roll cast (TRC) wrought magnesium Mg–Al–Zn (AZ31B) alloy. Considered were four reported variations of the popular Johnson–Cook equation and one Sellars–Tegart equation, along with their literature-reported coefficients for fitting AZ31B stress-strain behavior. In addition, six variations of the Zerilli–Armstrong equation (rarely used in FSP simulations) were also considered along with their literature-reported coefficients. The screening assessment was based on how well the considered constitutive equations fit experimental tensile stress-strain data of twin roll cast wrought AZ31B. Goodness of fit and residual sum of squares were the two statistical criteria used in the quantitative assessment, whereas a ‘visual’ measure was used as a qualitative one. Initial screening resulted in the selection of the best-fitting constitutive equation from each family: Johnson–Cook, Sellars–Tegart, and Zerilli–Armstrong. An HCP-specific Zerilli–Armstrong constitutive equation (dubbed here ZA6) was found to have the best quantitative and qualitative fit, with an R2 value of 0.967 compared to 0.934 and 0.826 for the Johnson–Cook and Sellars–Tegart constitutive equations, respectively.
Additionally, a 3D thermo-mechanically coupled FEM model was built in DEFORM 3D to simulate the experimental tensile test from which the experimental load-deflection data were obtained. The three ‘finalist’ equations were fed into the FEM simulations and compared on (1) the simulations' running times and (2) the goodness of fit of the simulation results to the experimental load-deflection data. The ZA6 constitutive equation exhibited favorable run times even when contrasted against the simpler mathematical form of the Sellars–Tegart equation: on average, ZA6 improved solution time by 5.4% compared with the Johnson–Cook equation and was almost identical (a 0.9% increase) to that of the ST equation. This indicates that the proposed equation is not numerically expensive and can safely be adopted in such FEM simulations. Based on the favorable running times and goodness of fit, it was concluded that the HCP-specific Zerilli–Armstrong constitutive equation ZA6 holds an advantage over all other considered equations and was therefore selected as most suitable for the numerical modeling of FSP of twin roll cast AZ31B.
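The R2 figures quoted above are the standard coefficient of determination, computable directly from observed and fitted stress values:

```python
def r_squared(y_obs, y_fit):
    """Coefficient of determination: 1 - SS_res/SS_tot, where SS_res is the
    residual sum of squares of the fit and SS_tot the total sum of squares
    about the mean of the observations."""
    ym = sum(y_obs) / len(y_obs)
    ss_res = sum((o - f) ** 2 for o, f in zip(y_obs, y_fit))
    ss_tot = sum((o - ym) ** 2 for o in y_obs)
    return 1.0 - ss_res / ss_tot
```

A perfect fit gives R2 = 1; the residual sum of squares, the paper's second quantitative criterion, is the `ss_res` term computed along the way.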

15.
Microelectromechanical systems (MEMS) are devices that integrate mechanical and electrical components in the micrometer regime. Self-assembled monolayers (SAMs) can be used to functionalise the surface of MEMS resonators in order to fabricate chemically specific mass-sensing devices. The work in this article uses atomic force microscopy (AFM) and X-ray photoemission spectroscopy (XPS) to investigate the pH-dependent adsorption of citrate-passivated Au nanoparticles onto amino-terminated Si3N4 surfaces. Mass adsorption data obtained using amino-functionalised ‘flap’ type MEMS resonators show maximum adsorption of the Au nanoparticles at pH = 5, in agreement with the AFM and XPS data, which demonstrates the potential of such a device as a pH-responsive nanoparticle detector.

16.
A simplified analytical model is devised, based on an equivalent layered approach using the concepts of the ‘rule of mixtures’ and ‘series and parallel capacitance theory’, to find the effect of bonding-layer thickness and the thermal environment on the effective properties of Macro-Fiber Composites (MFC). A thermo-electro-mechanical analysis is performed based on finite element calculations using a unit-cell method, accounting for the geometric properties of the constituents and the non-uniform electric field distribution across the electrodes. Experiments are also performed on commercially available MFCs under electrical load in various thermal environments to evaluate the coupling constants. The predictions of the proposed models are validated against experimental results and manufacturer data.
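The ‘rule of mixtures’ (parallel) and ‘series capacitance’ (series) combinations give the classical upper and lower estimates of an effective layered property. A sketch with hypothetical volume fractions and layer properties (not the MFC constituents of the paper):

```python
def parallel_mixture(fractions, props):
    """Rule of mixtures: volume-weighted sum, the 'parallel' (upper) estimate."""
    return sum(v * p for v, p in zip(fractions, props))

def series_mixture(fractions, props):
    """Series combination, as with capacitors in series: the inverse of the
    volume-weighted sum of inverses, the 'series' (lower) estimate."""
    return 1.0 / sum(v / p for v, p in zip(fractions, props))

# Illustrative two-layer stack: 60% of a stiff layer, 40% of a compliant one
# (property values in arbitrary units).
fractions = [0.6, 0.4]
props = [30.0, 3.0]
```

Which combination applies depends on whether the layers carry the field/load side by side or one after the other, which is exactly the distinction the equivalent layered approach exploits.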

17.
18.
This paper examines the current use of the Arrhenius equation and the activation energies associated with integrated circuits. A short introduction to the Arrhenius equation is given and it is shown how activation energies may be computed from experimental data. The paper next addresses the following basic question: is it reasonable to extrapolate from high-temperature laboratory life-tests down to use conditions using the Arrhenius equation? The paper shows that the answer to this question in general must be an emphatic no. It is not the validity of the Arrhenius equation as such that is questioned, but its indiscriminate use by reliability practitioners across the electronics industry. This is not a scientific paper, and no ‘proofs’ are given. The discussion is based on ‘typical’ integrated circuits using published data on the temperature dependence of the hazard rate of such circuits. The paper serves as a warning against thinking that ‘well established practices’ are necessarily scientifically sound.
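The extrapolation the paper warns about is usually done through the Arrhenius acceleration factor between stress and use temperatures; a sketch (the activation energy and temperatures below are illustrative):

```python
import math

K_B = 8.617e-5  # Boltzmann constant, eV/K

def acceleration_factor(Ea, T_use, T_stress):
    """Arrhenius acceleration factor between a use temperature and a
    higher stress temperature (both in kelvin), for activation energy Ea
    in eV: AF = exp((Ea/k) * (1/T_use - 1/T_stress))."""
    return math.exp((Ea / K_B) * (1.0 / T_use - 1.0 / T_stress))

# Illustrative life-test extrapolation: 125 C stress vs. 55 C use, Ea = 0.7 eV.
af = acceleration_factor(0.7, 328.15, 398.15)
```

Note how strongly the result depends on the assumed `Ea`: this exponential sensitivity to an often poorly known activation energy is precisely why the paper calls indiscriminate extrapolation unsound.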

19.
The important role of repeatability in measurement standards is introduced. For measurement standard devices that only provide stable values (single or multiple) meeting a specified uncertainty requirement, methods for testing repeatability are discussed. Taking a membrane-covered-electrode dissolved oxygen analyzer as an example, the measurement data are calculated and analysed. When assessing the repeatability of this type of measurement standard device, the actual situation should be considered and the test method that most accurately reflects the repeatability of the standard should be chosen.
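Repeatability is typically quantified as the experimental standard deviation of repeated readings taken under unchanged conditions; a minimal sketch (the readings below are invented for illustration):

```python
import math

def repeatability(readings):
    """Repeatability as the experimental standard deviation of n repeated
    readings of the same quantity: sqrt(sum((x - mean)^2) / (n - 1))."""
    n = len(readings)
    mean = sum(readings) / n
    return math.sqrt(sum((x - mean) ** 2 for x in readings) / (n - 1))

# Illustrative repeated readings of a dissolved-oxygen value, mg/L.
s = repeatability([10.0, 12.0, 14.0])
```

The choice discussed in the abstract is about *which* sequence of readings (single value or several values across the range) this statistic should be computed from, so that it genuinely reflects the standard's repeatability.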

20.
The experimental determination of the fracture toughness of inter-ply interfaces in monolithic composite specimens is far from trivial: even in standard test methods such as the Double Cantilever Beam (DCB), precautions must be taken in the choice of test configuration and in the post-treatment of the experimental results. Furthermore, non-standard measurements, such as the crack tip position during propagation, are generally required. In this paper, we investigate an alternative test configuration, the Climbing Drum Peel (CDP) test, classically used in the ‘adhesives’ community. The adaptation of the CDP specimen configuration to the testing of monolithic composites is discussed, and a systematic comparison is carried out between the CDP and DCB tests in terms of global and local indicators of the crack propagation behavior.
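For context, the simple-beam-theory data reduction commonly used with DCB specimens (the basis of the ASTM D5528 modified beam theory method) computes the mode-I energy release rate from load, opening displacement, specimen width, and crack length; the need for the crack length `a` here is exactly the non-standard measurement the abstract mentions:

```python
def dcb_g1(P, delta, b, a):
    """Mode-I energy release rate for a DCB specimen from simple beam theory:
    G_I = 3*P*delta / (2*b*a), with load P, opening displacement delta,
    specimen width b, and crack length a (consistent SI units).
    The modified beam theory of ASTM D5528 replaces a with a + |Delta|,
    a crack-length correction fitted from compliance data."""
    return 3.0 * P * delta / (2.0 * b * a)

# Illustrative reading: 100 N load, 4 mm opening, 25 mm width, 50 mm crack.
G1 = dcb_g1(100.0, 0.004, 0.025, 0.05)  # J/m^2
```

A peel-type reduction for the CDP test, by contrast, can be driven by load and drum geometry alone, which is part of its appeal here.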
