Similar Documents
20 similar documents retrieved (search time: 15 ms)
1.
This work aims to evaluate the potential risks of incidents in nuclear research reactors. For its development, two databases of the International Atomic Energy Agency (IAEA) were used: the Research Reactor Data Base (RRDB) and the Incident Reporting System for Research Reactors (IRSRR). Probabilistic safety analysis (PSA) was used for the study, with the probability calculations following the theory and equations of IAEA TECDOC-636. A program to evaluate the probabilities was developed in Scilab 5.1.1 for two distributions, Fisher (F) and chi-square, both at a confidence level of 90%. Using the Sordi equations, maximum admissible doses were obtained for comparison with the risk limits established by the International Commission on Radiological Protection (ICRP). The probability analysis led to the conclusion that the incidents that occurred involved radiation doses within the stochastic-effects reference interval established in ICRP-64.
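The abstract does not reproduce the TECDOC-636 equations. A minimal sketch of the standard chi-square confidence bounds on an event rate, assuming incidents follow a Poisson process, is given below; the event count and exposure are illustrative placeholders, not values from the paper.

```python
from scipy.stats import chi2

def poisson_rate_interval(n_events, exposure_years, conf=0.90):
    """Two-sided chi-square confidence bounds on a Poisson event rate.

    Standard PSA practice: with n events observed over T reactor-years,
    lower = chi2(alpha/2, 2n) / (2T), upper = chi2(1 - alpha/2, 2n + 2) / (2T).
    """
    alpha = 1.0 - conf
    lower = chi2.ppf(alpha / 2, 2 * n_events) / (2 * exposure_years) if n_events > 0 else 0.0
    upper = chi2.ppf(1 - alpha / 2, 2 * n_events + 2) / (2 * exposure_years)
    return lower, upper

# Illustrative numbers only: 3 reported incidents over 1200 reactor-years.
lo, hi = poisson_rate_interval(3, 1200.0)
print(f"incident rate in [{lo:.2e}, {hi:.2e}] per reactor-year (90% confidence)")
```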

2.
A two-phase methodology is presented as an aid to organizing job shop production in a cellular manufacturing system. The first phase (selection/assignment phase) selects the machines to be kept on the shop floor and assigns parts to the machines retained. The second phase (partition/reassignment phase) establishes a partition of the set of parts and corresponding cells of machines and reassigns some of the operations with a view to eliminating some intercell material movements. This phase is repeated until a partition meeting the operator's requirements is obtained. The results obtained with this method on several examples found in the literature are consistently equivalent to or even better than those hitherto proposed, in terms of intercell moves.
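A minimal sketch of the bookkeeping behind the second phase: scoring a given machine-to-cell partition by the number of intercell moves it induces, which the reassignment loop would then try to reduce. The routing format and all data are invented for illustration; the paper's actual heuristic is more elaborate.

```python
# Hypothetical data: each part visits a sequence of machines (its routing),
# and each machine is assigned to one cell of the current partition.
routings = {"P1": ["M1", "M2", "M3"], "P2": ["M2", "M4"], "P3": ["M1", "M4", "M3"]}
cell_of = {"M1": 0, "M2": 0, "M3": 1, "M4": 1}

def intercell_moves(routings, cell_of):
    """Count consecutive operation pairs executed in different cells."""
    return sum(
        cell_of[a] != cell_of[b]
        for route in routings.values()
        for a, b in zip(route, route[1:])
    )

# The partition/reassignment phase would repeat: reassign an operation to a
# duplicate machine in the part's home cell, re-score, and keep improvements.
print("intercell moves:", intercell_moves(routings, cell_of))
```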

3.
An analysis is made of the transition to improved confinement (H-mode) observed in lower hybrid heating experiments in the FT-2 tokamak. Particular attention is paid to processes taking place near the wall, including the suppression of microfluctuations accompanying the L-H transition and the buildup of edge-localized modes (ELM activity). The conditions for transition to the H-mode are discussed only for Ohmic heating. The data are compared with the results of large tokamak experiments. Pis’ma Zh. Tekh. Fiz. 23, 52–57 (January 12, 1997)

4.
Ji J, Zhang Y, Zhou X, Kong J, Tang Y, Liu B. Analytical Chemistry 2008, 80(7): 2457–2463
An on-chip microreactor is proposed for accelerating protein digestion through the construction of a nanozeolite-assembled network. The nanozeolite microstructure was assembled using a layer-by-layer technique based on poly(diallyldimethylammonium chloride) and zeolite nanocrystals. The adsorption of trypsin in the nanozeolite network was studied theoretically using the Langmuir adsorption isotherm model. It was found that the trypsin-containing nanozeolite networks assembled within a microchannel could act as a stationary phase with a large surface-to-volume ratio for the highly efficient proteolysis of both low-level proteins and complex extracts. The maximum proteolytic rate of the adsorbed trypsin was measured to be 350 mM min−1 μg−1, much faster than that in solution. Moreover, owing to the large surface-to-volume ratio and biocompatible microenvironment provided by the nanozeolite-assembled films, as well as the microfluidic confinement effect, proteins down to 16 fmol per analysis were confidently identified with the as-prepared microreactor within a very short residence time, coupled to matrix-assisted laser desorption/ionization time-of-flight mass spectrometry. The on-chip approach was further demonstrated in the identification of complex extracts from mouse macrophages, integrated with two-dimensional liquid chromatography-electrospray ionization tandem mass spectrometry. This microchip reactor is a promising, facile means of protein identification.
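The Langmuir analysis mentioned above fits surface loading against solution concentration. A minimal curve-fitting sketch follows; the concentration and loading values are invented placeholders, not the paper's data.

```python
import numpy as np
from scipy.optimize import curve_fit

def langmuir(c, q_max, K):
    """Langmuir isotherm: adsorbed amount q as a function of concentration c."""
    return q_max * K * c / (1.0 + K * c)

# Placeholder data (mg/mL vs. adsorbed trypsin, arbitrary units) -- not from the paper.
c = np.array([0.05, 0.1, 0.25, 0.5, 1.0, 2.0])
q = np.array([0.9, 1.6, 2.9, 4.0, 4.9, 5.5])

(q_max, K), _ = curve_fit(langmuir, c, q, p0=(6.0, 1.0))
print(f"q_max = {q_max:.2f} (saturation loading), K = {K:.2f} mL/mg (affinity)")
```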

5.
This paper proposes a methodology for designing job shops under the fractal layout organization, which has been introduced as an alternative to the more traditional function and product organizations. We begin with an illustration of how a fractal job shop is constituted from individual fractal cells. We then consider the joint assignment of products and their processing requirements to fractal cells, the layout of workstation replicates in a fractal cell, and the layout of cells with respect to each other. The main challenge in assigning flow to workstation replicates is that flow assignment is in itself a layout-dependent decision problem. We confront this dilemma by proposing an iterative algorithm that updates layouts depending on flow assignments, and flow assignments based on layouts. The proposed heuristic is computationally feasible, as evidenced by our experience with test problems taken from the literature. We conclude by showing how the methodologies developed in this paper have helped us evaluate fractal job shop designs through specification of fractal cells, assignment of processing requirements to workstation replicates, and development of processor-level layouts. This step has had the far-reaching consequence of demonstrating the viability and validity of the fractal layout organization.
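The layout/flow circularity described above is the classic setting for alternating (fixed-point) iteration. A toy sketch of that loop follows, with invented coordinates and flow origins; in this degenerate form it reduces to a k-means-style update, whereas the paper alternates between much richer assignment and layout subproblems.

```python
import numpy as np

rng = np.random.default_rng(0)
replicas = rng.uniform(0, 10, size=(4, 2))   # positions of 4 workstation replicates
jobs = rng.uniform(0, 10, size=(30, 2))      # locations generating material flow

for _ in range(50):
    # Flow assignment given the layout: route each flow to the nearest replicate.
    d = np.linalg.norm(jobs[:, None, :] - replicas[None, :, :], axis=2)
    assign = d.argmin(axis=1)
    # Layout update given the flows: move each replicate to its flow centroid.
    new = np.array([jobs[assign == k].mean(axis=0) if (assign == k).any() else replicas[k]
                    for k in range(len(replicas))])
    if np.allclose(new, replicas):
        break                                # fixed point: layout and flows agree
    replicas = new

print("total travel distance:", d[np.arange(len(jobs)), assign].sum().round(2))
```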

6.
Venkatadri et al. (IIE Transactions, 29, 911-924, 1997) have proposed a new methodology for shop floor layout that involves the use of fractal cells and have compared the performance of their new layout with those obtained using the function, group and holographic layouts. A few inconsistencies are present in their results, expressed as flow scores. This note points out these inconsistencies through the use of appropriate examples.

7.
A methodology for crack tip mesh design is developed which consists of comparing the mesh geometric parameters against the accuracy of the finite element solution. By successive changes in the mesh parameters, a near-optimal mesh can be obtained. This was done here for two-dimensional, linear elastic, single-mode problems. The direct displacement extrapolation method is used for stress intensity factor estimation.
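A minimal sketch of the direct displacement extrapolation named above, for mode I under plane strain: apparent stress intensity factors computed from crack-face opening displacements at nodes behind the tip are fitted linearly and extrapolated to the tip. The nodal values and material constants below are synthetic, not from the paper.

```python
import numpy as np

E, nu = 200e9, 0.3                    # illustrative steel properties, Pa
mu = E / (2 * (1 + nu))               # shear modulus
kappa = 3 - 4 * nu                    # Kolosov constant, plane strain

# Synthetic nodal data: distance r behind the tip (m) and crack-face opening u_y (m).
r = np.array([0.5e-3, 1.0e-3, 1.5e-3, 2.0e-3])
uy = np.array([2.1e-7, 3.1e-7, 3.9e-7, 4.6e-7])

# Near-tip displacement field on the crack face (theta = pi):
#   u_y = (K_I / (2 mu)) * sqrt(r / (2 pi)) * (kappa + 1)
K_app = 2 * mu * uy / (kappa + 1) * np.sqrt(2 * np.pi / r)

# Linear fit of K_app(r) and extrapolation to the crack tip, r -> 0.
slope, K_I = np.polyfit(r, K_app, 1)
print(f"K_I ~ {K_I / 1e6:.2f} MPa*sqrt(m)")
```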

8.
A methodology is described for probabilistic predictions of future climate. This is based on a set of ensemble simulations of equilibrium and time-dependent changes, carried out by perturbing poorly constrained parameters controlling key physical and biogeochemical processes in the HadCM3 coupled ocean-atmosphere global climate model. These (ongoing) experiments allow quantification of the effects of earth system modelling uncertainties and internal climate variability on feedbacks likely to exert a significant influence on twenty-first century climate at large regional scales. A further ensemble of regional climate simulations at 25 km resolution is being produced for Europe, allowing the specification of probabilistic predictions at spatial scales required for studies of climate impacts. The ensemble simulations are processed using a set of statistical procedures, the centrepiece of which is a Bayesian statistical framework designed for use with complex but imperfect models. This supports the generation of probabilities constrained by a wide range of observational metrics, and also by expert-specified prior distributions defining the model parameter space. The Bayesian framework also accounts for additional uncertainty introduced by structural modelling errors, which are estimated using our ensembles to predict the results of alternative climate models containing different structural assumptions. This facilitates the generation of probabilistic predictions combining information from perturbed physics and multi-model ensemble simulations. The methodology makes extensive use of emulation and scaling techniques trained on climate model results. These are used to sample the equilibrium response to doubled carbon dioxide at any required point in the parameter space of surface and atmospheric processes, to sample time-dependent changes by combining this information with ensembles sampling uncertainties in the transient response of a wider set of earth system processes, and to sample changes at local scales. The methodology is necessarily dependent on a number of expert choices, which are highlighted throughout the paper.
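The emulation step, training a cheap statistical surrogate on ensemble members so the response can be sampled anywhere in parameter space, can be sketched with a generic Gaussian-process regressor. The "ensemble" below is a synthetic toy, not HadCM3 output.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

rng = np.random.default_rng(1)
# Toy ensemble: 40 perturbed-parameter members, 3 model parameters each,
# with one scalar response (e.g., equilibrium warming) per member.
X = rng.uniform(0, 1, size=(40, 3))
y = 2.0 + 1.5 * X[:, 0] - 0.8 * X[:, 1] ** 2 + 0.1 * rng.standard_normal(40)

emulator = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True)
emulator.fit(X, y)

# Emulated response (with uncertainty) at a parameter setting never simulated:
mean, std = emulator.predict(np.array([[0.3, 0.7, 0.5]]), return_std=True)
print(f"emulated response: {mean[0]:.2f} +/- {std[0]:.2f}")
```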

9.
This article focuses on the robust optimization of an aircraft preliminary design under operational constraints. According to engineers' know-how, the aircraft preliminary design problem can be modelled as an uncertain optimization problem whose objective (the cost or the fuel consumption) is almost affine and whose constraints are convex. It is shown that this uncertain optimization problem can be approximated in a conservative manner by an uncertain linear optimization program, which enables the use of the robust linear programming techniques of Ben-Tal, El Ghaoui, and Nemirovski [Robust Optimization, Princeton University Press, 2009]. This methodology is then applied to two real cases of aircraft design, and numerical results are presented.
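A minimal instance of the robust-LP machinery cited above: with interval (box) uncertainty a_j in [a_bar_j - a_hat_j, a_bar_j + a_hat_j] and x >= 0, the worst case of a^T x <= b is the linear constraint (a_bar + a_hat)^T x <= b, so the robust counterpart remains an LP. All numbers below are toy data, not the aircraft model.

```python
import numpy as np
from scipy.optimize import linprog

# Nominal constraint coefficients and their interval half-widths (toy data).
a_bar = np.array([[2.0, 1.0], [1.0, 3.0]])
a_hat = np.array([[0.2, 0.1], [0.1, 0.3]])
b = np.array([10.0, 15.0])
c = np.array([-3.0, -5.0])            # maximize 3x1 + 5x2  ->  minimize c^T x

# Nominal LP vs. its robust counterpart (worst-case coefficients, valid for x >= 0).
nominal = linprog(c, A_ub=a_bar, b_ub=b, bounds=[(0, None)] * 2, method="highs")
robust = linprog(c, A_ub=a_bar + a_hat, b_ub=b, bounds=[(0, None)] * 2, method="highs")
print("nominal x:", nominal.x.round(3), " robust x:", robust.x.round(3))
```

The robust solution retreats from the nominal optimum just enough to stay feasible for every realization of the uncertain coefficients, which is the conservative approximation the abstract describes.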

10.
The problem addressed in this paper is the development of a physico-mathematical basis for mechanical tolerances. The lack of such a basis has fostered a decoupling of design (function) and manufacturing. The groundwork for a tolerancing methodology is laid by a model of profile errors, whose components are justified by physical reasoning and estimated using mathematical tools. The methodology is then presented as an evolutionary procedure that harnesses the various tools, as required, to analyze profiles in terms of a minimum set of profile parameters and to re-generate them from those parameters. This equips the designer with a rational means of estimating performance prior to manufacturing, hence integrating design and manufacturing. The utility of the functional tolerancing methodology is demonstrated with performance simulations of a lathe headstock design, focusing on gear transmission with synthesized errors.
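One standard mathematical tool for decomposing a measured profile into components, a low-order fit for form error plus spectral bands for waviness and roughness, can be sketched as follows. The paper's actual error model and parameter set are not reproduced here; the profile, cutoff, and scales are all hypothetical.

```python
import numpy as np

rng = np.random.default_rng(2)
x = np.linspace(0, 100, 1000)                       # mm along the profile
profile = 0.02 * x + 0.5 * np.sin(2 * np.pi * x / 25) + 0.05 * rng.standard_normal(x.size)

form = np.polyval(np.polyfit(x, profile, 1), x)     # long-wavelength form error
residual = profile - form
spectrum = np.fft.rfft(residual)
freq = np.fft.rfftfreq(x.size, d=x[1] - x[0])       # cycles per mm

cutoff = 1 / 8.0                                    # hypothetical waviness/roughness cutoff
waviness = np.fft.irfft(np.where(freq < cutoff, spectrum, 0), n=x.size)
roughness = residual - waviness
print("RMS form/waviness/roughness:",
      [round(float(np.sqrt(np.mean(v ** 2))), 4)
       for v in (form - form.mean(), waviness, roughness)])
```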

11.
The growing number of manufactured products has given rise to an alarming increase in the volume of industrial waste that is threatening the environment. However, if the various stages of a product's life cycle are designed to be environmentally sustainable, ecological damage can be minimized, if not eliminated. This paper discusses a methodology that scores the cost, quality and environmental standing of four stages of the life cycle of a product. This is achieved through eight indices, or metrics, which depict the environmental standing of the product. The eight indices cover product cost, product reliability, serviceability and product retirement, among others. A self-learning algorithm is discussed that computes the best and worst values of the indices from a variety of similar products. This enables the designer to build up a comprehensive database of environmental data for a product. When displayed in a radar chart, the indices allow the environmental standing of a product to be quickly assessed, or compared with competitor designs. To emphasize their relative importance, weights may be assigned to the indices. Four case studies are presented and discussed. In the case of the injection-moulded multi-purpose holder, it was found that its reliability could be improved at the expense of manufacturability and retirement options. The eco-indices of 15 specimens of telecommunications paging devices, 13 amp 3-pin electrical plugs and 360 ml moulded drinking cups were computed and plotted on a radar chart. Overall, the analyses revealed that the five models of telecommunications pagers were not designed for end-of-life disposal and that the eco-efficiency of the electrical plugs depended on their country of manufacture, the ones made in the West being more environmentally benign. The drinking cups, on the other hand, illustrate the relative impact of different materials on the environment. The methodology can potentially benefit product designers, manufacturing engineers, sales/marketing personnel and, in fact, anyone with a vested interest in environmentally friendly product design.
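Displaying eight indices in a radar chart, as the abstract describes, is a short polar plot. The index names and scores below are illustrative placeholders, not the paper's metrics.

```python
import numpy as np
import matplotlib.pyplot as plt

labels = ["cost", "quality", "reliability", "serviceability",
          "manufacturability", "energy", "material", "retirement"]
scores = [0.7, 0.8, 0.6, 0.9, 0.5, 0.6, 0.7, 0.4]   # placeholder eco-indices in [0, 1]

angles = np.linspace(0, 2 * np.pi, len(labels), endpoint=False)
angles = np.concatenate([angles, angles[:1]])       # close the polygon
scores = scores + scores[:1]

ax = plt.subplot(polar=True)
ax.plot(angles, scores, "o-")
ax.fill(angles, scores, alpha=0.25)
ax.set_xticks(angles[:-1])
ax.set_xticklabels(labels, fontsize=8)
plt.show()
```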

12.
The disregard of fast processes (e.g., the L-H transition) in a tokamak plasma can lead to large errors in the determination of the energy confinement time τ_E. A major upgrade of the electromagnetic diagnostic system for the analysis of plasma parameters, and of the data collection system, has made it possible to take into account the influence of the transient character of the radial profile and the value of the plasma current I_P, as well as the stored energy W, on the determination of τ_E from diamagnetic measurements, and to investigate fast processes involved in the L-H transition. The energy confinement time is calculated from the equation τ_E = W / [U_P I_P − (d/dt)(L I_P²/2) − dW/dt], where U_P is the plasma loop voltage (V), I_P is the plasma current (A), and L is the total inductance (H). The total inductance L of the plasma column has been determined from measurements of the quantity β_J + l_i/2, where β_J is the ratio between the gas-kinetic pressure and the pressure of the poloidal magnetic field, and l_i is the internal inductance. The inclusion of transient behavior in the determination of τ_E from diamagnetic measurements gives a correction of up to 50%. Pis’ma Zh. Tekh. Fiz. 23, 8–13 (October 26, 1997)
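A minimal sketch of evaluating the equation above from measured time series, with the transient terms retained via numerical differentiation. All waveforms below are synthetic placeholders standing in for diagnostic signals.

```python
import numpy as np

t = np.linspace(0, 0.05, 500)                 # time base, s
# Synthetic waveforms standing in for measured signals:
I_p = 120e3 * (1 - np.exp(-t / 0.010))        # plasma current, A
U_p = 1.5 * np.ones_like(t)                   # loop voltage, V
L = 2.0e-6 * np.ones_like(t)                  # total inductance, H
W = 800.0 * (1 - np.exp(-t / 0.015))          # stored energy, J

dWdt = np.gradient(W, t)
dLI2dt = np.gradient(L * I_p ** 2 / 2, t)     # transient term d/dt (L I_p^2 / 2)
tau_E = W / (U_p * I_p - dLI2dt - dWdt)
print(f"tau_E at end of ramp: {tau_E[-1] * 1e3:.1f} ms")
```

Dropping the dLI2dt and dWdt terms in the denominator reproduces the steady-state estimate; the difference between the two is the transient correction the abstract reports can reach 50%.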

13.
The ability to adapt to changes in products, processes and technologies is a key competitive factor. Changeable manufacturing paradigms have emerged to address this need, but industrial implementation remains challenging. In this paper, a participatory design methodology for changeable manufacturing systems is proposed, covering requirements specification, selection of the appropriate manufacturing paradigm, and suitable physical and logical enablers. The methodology supports companies in determining the potential for and mechanisms of transitioning towards changeable manufacturing systems, based on knowledge of products, production, technologies and facilities. The developed methodology is applicable to both new and existing manufacturing systems. It is demonstrated in two industrial cases, which highlight its applicability and show how differences in manufacturing characteristics, change requirements and enablers lead to different recommended transitions towards changeability.

14.
15.
Affective design and the determination of engineering specifications are commonly conducted separately in the early product design stage. Generally, designers and engineers are required to determine the settings of design attributes (for affective design) and engineering requirements (for engineering design), respectively, for new products. Some design attributes and engineering requirements may be common to both processes, yet their settings can end up differing because the two processes are separated. No previous study was found that determines the settings of the design attributes and engineering requirements simultaneously. To bridge this gap, a methodology that considers affective design and the determination of engineering specifications of a new product simultaneously is proposed. The proposed methodology involves the generation of customer satisfaction models, the formulation of a multi-objective optimisation model, and its solution using a chaos-based NSGA-II. To illustrate and validate the proposed methodology, a case study of mobile phone design was conducted. A validation test showed that the customer satisfaction values obtained with the proposed methodology were higher than those obtained with the combined standalone quality function deployment and standalone affective design approach.
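The optimisation step can be sketched with a stock NSGA-II; the paper's chaos-based variant is not shown. Below is a minimal setup via the pymoo library (API as of pymoo 0.6), with two invented satisfaction objectives over three hypothetical design attributes.

```python
from pymoo.core.problem import ElementwiseProblem
from pymoo.algorithms.moo.nsga2 import NSGA2
from pymoo.optimize import minimize

class ToyAffectiveDesign(ElementwiseProblem):
    """Two placeholder satisfaction objectives, negated for minimization."""
    def __init__(self):
        super().__init__(n_var=3, n_obj=2, xl=0.0, xu=1.0)

    def _evaluate(self, x, out, *args, **kwargs):
        s1 = x[0] * (1 - x[1]) + 0.5 * x[2]        # hypothetical affective score
        s2 = (1 - x[0]) + x[1] * x[2]              # hypothetical engineering score
        out["F"] = [-s1, -s2]

res = minimize(ToyAffectiveDesign(), NSGA2(pop_size=50), ("n_gen", 100), seed=1)
print("Pareto set size:", len(res.F))
```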

16.
The influence of the toroidal magnetic field on the energy confinement time τ_E in the ohmic H-mode has been studied in the TUMAN-3M tokamak with low magnetic field. The experiments were performed at a toroidal magnetic field of B_T = 0.68–1.0 T, which is about twice as large as that (0.25–0.5 T) studied in analogous experiments on the NSTX and MAST spherical tokamaks. The results are indicative of a strong dependence of the energy confinement time on toroidal magnetic field: τ_E ∝ B_T^(0.75–0.8). This scaling is much stronger than that projected for ITER (τ_E,IPB98 ∝ B_T^0.15), while being somewhat weaker than the scalings observed on the NSTX and MAST devices. The stronger (as compared to the ITER scaling) dependence of τ_E on B_T observed in these experiments should be taken into account in designing thermonuclear facilities with small aspect ratios and low toroidal magnetic fields, in particular fusion neutron sources.
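Extracting a power-law exponent such as τ_E ∝ B_T^0.75–0.8 from a field scan is a log-log regression. A minimal sketch with invented data points standing in for the TUMAN-3M scan:

```python
import numpy as np

# Invented (B_T [T], tau_E [ms]) pairs -- not the experimental values.
B_T = np.array([0.68, 0.78, 0.88, 1.00])
tau = np.array([4.1, 4.55, 5.0, 5.5])

alpha, ln_C = np.polyfit(np.log(B_T), np.log(tau), 1)
print(f"tau_E scales as B_T^{alpha:.2f}")      # exponent of the power-law fit
```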

17.
Companies design and manufacture widely diversified products to satisfy the needs of their customers and markets. Two issues important to achieving this aim are discussed. The first concerns adequate diversity for a particular market. The second concerns the management and manufacture of products within an acceptable lead time and at an acceptable cost. The two issues are examined with a methodology for the design of product families. This methodology is based on a data-mining approach and focuses on the analysis of functional requirements.
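One data-mining route to product families is clustering products on their functional-requirement vectors. A minimal sketch follows; the binary encoding and data are invented for illustration, and the paper's actual mining approach may differ.

```python
import numpy as np
from sklearn.cluster import KMeans

# Rows = products, columns = hypothetical binary functional requirements.
requirements = np.array([
    [1, 1, 0, 0, 1],
    [1, 1, 0, 0, 0],
    [0, 0, 1, 1, 0],
    [0, 1, 1, 1, 0],
    [1, 1, 0, 1, 1],
])

families = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(requirements)
print("family label per product:", families)
```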

18.
A general optimization methodology for the optimal design of robotic manipulators is presented and illustrated by its application to a realistic and practical three-link revolute-joint planar manipulator. The end-effector carries out a prescribed vertical motion for which the average torque requirement of the electrical driving motors and, respectively, the electric input energy to the driving motors are minimized with respect to positional and dimensional design variables. In addition to simple physical bounds placed on the variables, the maximum deliverable torques of the driving motors and the allowable joint angles between successive links represent further constraints on the system. The optimization is carried out via a penalty function formulation of the constrained problem, to which a proven robust unconstrained optimization method is applied. The problem of singularities (also known as degeneracy or lock-up), which may occur for certain choices of design variables, is successfully dealt with by means of a specially proposed procedure in which a high artificial objective function value is assigned to such 'lock-up trajectories'. Designs are obtained that are feasible and practical, with reductions in the objective functions in comparison to those of arbitrarily chosen infeasible initial designs. Copyright © 1999 John Wiley & Sons, Ltd.
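The penalty-function formulation plus a robust unconstrained optimizer can be sketched generically. Below, a toy quadratic stands in for the torque objective and a single linear inequality for a joint-angle-style constraint; none of this is the paper's manipulator model, and Nelder-Mead is used only as an example of a derivative-free robust method.

```python
import numpy as np
from scipy.optimize import minimize

def objective(x):
    """Stand-in for the average-torque objective."""
    return (x[0] - 1) ** 2 + (x[1] + 0.5) ** 2

def penalized(x, rho=1e3):
    # Constraint g(x) = x[0] - x[1] - 1 <= 0, folded in as a quadratic penalty;
    # infeasible "lock-up" configurations could likewise be assigned a large
    # artificial objective value, as the abstract describes.
    g = x[0] - x[1] - 1.0
    return objective(x) + rho * max(g, 0.0) ** 2

res = minimize(penalized, x0=np.array([2.0, 2.0]), method="Nelder-Mead")
print("design variables:", res.x.round(4), " objective:", round(res.fun, 4))
```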

19.
Fatigue failures of machine components remain a topic of major importance in the industrial world. They usually originate at geometrical features such as holes, notches, corners and grooves, whose actual influence is not well estimated in the design phase. Cast parts made of gray cast iron are typical examples of components that are difficult to design against fatigue, because they are simultaneously characterized by complex geometries and microstructure. In this contribution the issue is discussed starting from the failure analysis of a cyclically pressurized hydraulic component. The work consists of an experimental procedure, i.e. the fatigue characterization of the material on specimens extracted from cast parts, and of a numerical design activity, i.e. the prediction of lifetime according to the critical distance method [Taylor D. Crack modelling: a technique for the fatigue design of components. Engng Fail Anal 1996;3(2):129-36]. The implication is that cracks and localized damage begin to appear in the microstructure of gray cast iron at sharp notches from the first cycles of loading. In order to obtain a correct prediction, the fatigue design should adopt fracture-mechanics arguments to determine non-propagating conditions.
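The critical distance method cited above [Taylor 1996] works from the material length L = (1/π)(ΔK_th/Δσ_0)²; in the point-method variant, the stress evaluated at L/2 ahead of the notch is compared with the plain fatigue limit. A one-line calculation with illustrative values of the right order for gray cast iron (not the paper's measured data):

```python
import numpy as np

dK_th = 10.0    # threshold stress-intensity range, MPa*sqrt(m) -- illustrative
ds_0 = 200.0    # plain-specimen fatigue limit range, MPa -- illustrative

L = (1 / np.pi) * (dK_th / ds_0) ** 2      # Taylor's critical distance, m
print(f"L = {L * 1e3:.2f} mm; point-method evaluation at L/2 = {L / 2 * 1e3:.2f} mm")
```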

20.
The analysis and optimization of complex multiphysics systems presents a series of challenges that limit the practical use of computational tools. Specifically, the optimization of such systems involves multiple interconnected components with competing quantities of interest and high‐dimensional spaces and necessitates the use of costly high‐fidelity solvers to accurately simulate the coupled multiphysics. In this paper, we put forth a data‐driven framework to address these challenges leveraging recent advances in machine learning. We combine multifidelity Gaussian process regression and Bayesian optimization to construct probabilistic surrogate models for given quantities of interest and explore high‐dimensional design spaces in a cost‐effective manner. The synergistic use of these computational tools gives rise to a tractable and general framework for tackling realistic multidisciplinary optimization problems. To demonstrate the specific merits of our approach, we have chosen a challenging large‐scale application involving the hydrostructural optimization of three‐dimensional supercavitating hydrofoils. To this end, we have developed an automated workflow for performing multiresolution simulations of turbulent multiphase flows and multifidelity structural mechanics (combining three‐dimensional and one‐dimensional finite element results), the results of which drive our machine learning analysis in pursuit of the optimal hydrofoil shape.
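The core loop of GP-based Bayesian optimization (fit a probabilistic surrogate, pick the next design by expected improvement, evaluate, refit) can be sketched on a toy one-dimensional design parameter. The multifidelity coupling of the paper is not shown; the "solver" below is a cheap stand-in function.

```python
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

def expensive_solver(x):                 # stand-in for a high-fidelity simulation
    return np.sin(3 * x) + 0.1 * x ** 2

rng = np.random.default_rng(0)
X = rng.uniform(-2, 2, size=(4, 1))      # small initial design of experiments
y = expensive_solver(X).ravel()
grid = np.linspace(-2, 2, 400).reshape(-1, 1)

for _ in range(10):
    gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True).fit(X, y)
    mu, sd = gp.predict(grid, return_std=True)
    best = y.min()
    z = (best - mu) / np.maximum(sd, 1e-9)
    ei = (best - mu) * norm.cdf(z) + sd * norm.pdf(z)   # expected improvement
    x_next = grid[ei.argmax()].reshape(1, 1)            # next design to simulate
    X = np.vstack([X, x_next])
    y = np.append(y, expensive_solver(x_next).ravel())

print(f"best design found: x = {X[y.argmin()][0]:.3f}, objective = {y.min():.3f}")
```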
