Similar Documents
Found 20 similar documents (search time: 31 ms)
1.
Published studies and audits have documented that a significant number of U.S. Army systems are failing to demonstrate established reliability requirements. To address this issue, the Army developed a new reliability policy in December 2007 which encourages use of cost-effective reliability best practices. The intent of this policy is to improve the reliability of Army systems and materiel, which in turn will have a significant positive impact on mission effectiveness, logistics effectiveness and life-cycle costs. Under this policy, the Army strongly encourages the use of Physics of Failure (PoF) analysis on mechanical and electronic systems. At the US Army Materiel Systems Analysis Activity, PoF analyses are conducted to support contractors, program managers and engineers on systems at all stages of acquisition, from design through test and evaluation (T&E) to fielded systems. This article discusses using the PoF approach to improve reliability of military products. PoF is a science-based approach to reliability that uses modeling and simulation to eliminate failures early in the design process by addressing root-cause failure mechanisms in a computer-aided engineering environment. The PoF approach involves modeling the root causes of failure such as fatigue, fracture, wear, and corrosion. Computer-aided design tools have been developed to address various loads, stresses, failure mechanisms, and failure sites. This paper focuses on understanding the cause and effect of the physical processes and mechanisms that cause degradation and failure of materials and components. A reliability assessment case study of circuit cards consisting of dense circuitry is discussed. System-level dynamics models, component finite element models and fatigue-life models were used to reveal the underlying physics of the hardware in its mission environment.
Outputs of these analyses included forces acting on the system, displacements of components, accelerations, stress levels, weak points in the design and probable component life. This information may be used during the design process to make design changes early in the acquisition process, when changes are easier to make and are much more cost effective. Design decisions and corrective actions made early in the acquisition phase lead to improved efficiency and effectiveness of the T&E process. The intent is to make fixes prior to T&E, which will reduce test time and cost, allow more information to be obtained from testing and improve test focus. PoF analyses may be conducted for failures occurring during test to better understand the underlying physics of the problem and identify the root cause of failures, which may lead to better fixes for discovered problems, reduced test-fix-test iterations and reduced decision risk. The same analyses and benefits may be applied to systems which are exhibiting failures in the field.
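The fatigue-life modeling this abstract describes can be illustrated with a toy calculation. The sketch below combines a Basquin S-N curve with Miner's linear damage rule, two standard physics-of-failure ingredients; every material constant and load level in it is hypothetical, not data from the Army case study.

```python
# Sketch of a physics-of-failure fatigue-life estimate using a Basquin
# S-N curve and Miner's linear damage rule. All material constants and
# mission stress levels below are illustrative placeholders.

def cycles_to_failure(stress_mpa, C=1e12, b=3.0):
    """Basquin relation: N = C * S**(-b)."""
    return C * stress_mpa ** (-b)

def miner_damage(load_blocks):
    """load_blocks: list of (stress_mpa, applied_cycles) pairs.
    Failure is predicted when accumulated damage reaches 1.0."""
    return sum(n / cycles_to_failure(s) for s, n in load_blocks)

# Hypothetical mission profile: vibration stresses on a solder joint.
profile = [(80.0, 2e5), (120.0, 5e4), (200.0, 1e4)]
damage_per_mission = miner_damage(profile)
missions_to_failure = 1.0 / damage_per_mission
```

In a real analysis the stress amplitudes would come from the system-level dynamics and finite element models the abstract mentions, not from hand-picked numbers.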

2.
Karabuk, Suleyman; Wu, S. David. IIE Transactions, 2002, 34(9): 743-759
Semiconductor capacity planning is a cross-functional decision that requires coordination between the marketing and manufacturing divisions. We examine the main issues of a decentralized coordination scheme in a setting observed at a major US semiconductor manufacturer: marketing managers reserve capacity from manufacturing based on product demands, while attempting to maximize profit; manufacturing managers allocate capacity to competing marketing managers so as to minimize operating costs while ensuring efficient resource utilization. This cross-functional planning problem has two important characteristics: (i) both demands and capacity are subject to uncertainty; and (ii) all decision entities own private information while being self-interested. To study the issues of coordination we first formulate the local marketing and manufacturing decision problems as separate stochastic programs. We then formulate a centralized stochastic programming model (JCA), which maximizes the firm's overall profit. (JCA) establishes a theoretical benchmark for performance, but is only achievable when all planning information is public. If local decision entities are to keep their planning information private, we submit that the best achievable coordination corresponds to an alternative stochastic model (DCA). We analyze the relationship and the theoretical gap between (JCA) and (DCA), thereby establishing the price of decentralization. Next, we examine two mechanisms that coordinate the marketing and manufacturing decisions to achieve (DCA) using different degrees of information exchange. Using insights from the Auxiliary Problem Principle (APP), we show that under both coordination mechanisms the divisional proposals converge to the global optimal solution of (DCA). We illustrate the theoretical insights using numerical examples as well as a real-world case.
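The reservation problem faced by a single marketing manager can be caricatured as a tiny scenario-based stochastic program. The sketch below brute-forces the expected-profit-maximizing reservation over discrete demand scenarios; all prices, costs and scenario data are invented for illustration, and the paper's models are full stochastic programs rather than enumerations.

```python
# Toy scenario-based sketch of the capacity-reservation idea: reserve
# capacity before demand is known, maximizing expected profit over
# discrete demand scenarios. All figures below are hypothetical.

scenarios = [(60, 0.3), (100, 0.5), (140, 0.2)]  # (demand, probability)
unit_margin, reservation_cost = 5.0, 2.0

def expected_profit(reserved):
    # sold quantity is capped by both the reservation and the demand
    return sum(p * (unit_margin * min(reserved, d) - reservation_cost * reserved)
               for d, p in scenarios)

# brute-force search over a discrete grid of reservation levels
best = max(range(0, 201), key=expected_profit)
```

The optimum balances the margin on marginal demand served against the sunk reservation cost, exactly the trade-off that uncertainty makes nontrivial here.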

3.
During the last 30 years, enterprise modelling has been recognised as an efficient tool to externalise the knowledge of companies in order to understand their operations, to analyse how they run and to design new systems from several points of view: functions, processes, decisions, resources and information technology. This paper aims at describing the long evolution of enterprise modelling techniques as well as one of the future challenges of these techniques: the transformation of enterprise models. In the first part, the paper describes the evolution of enterprise modelling techniques from the divergence era to the convergence period. In the second part, the paper focuses on recent advances in the use of enterprise models through model-driven approaches, interoperability problem-solving and simulation, all of which share the same characteristic: they rely on the transformation of enterprise models.

4.
This work considers an NP-hard scheduling problem that is fundamental to the production planning of flexible machines, to cutting-pattern industries and also to the design of VLSI circuits. A new asynchronous collective search model is proposed, exploring the search space in a manner that concentrates effort on those areas of higher perceived potential. This is done with the use of a coordination policy which enables the processes with the greatest performance to act as ‘attractors’ to those processes trapped in areas of worse perceived potential. Numerical results are obtained for problems of realistic industrial size, and those results are compared to previously published optimal solutions. This comparison demonstrates the effectiveness of the method, as in 276 problems out of a set of 280 we are able to match previously reported optimal results.
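The attractor-based coordination policy can be sketched on a toy continuous objective. In the sketch below, several search processes explore independently and, at each coordination step, the worst-performing process restarts near the current best ("attractor"); the objective, parameters and sequential (rather than truly asynchronous) loop are all simplifications.

```python
import random

# Toy sketch of attractor-based collective search: independent local
# searchers plus a coordination step pulling the worst searcher toward
# the best. Objective and parameters are illustrative, not the paper's
# scheduling problem.

random.seed(0)

def objective(x):              # minimize; optimum at x = 3.0
    return (x - 3.0) ** 2

def collective_search(n_procs=5, rounds=100, step=0.5):
    xs = [random.uniform(-10, 10) for _ in range(n_procs)]
    for _ in range(rounds):
        # each process performs one greedy local-search move
        for i in range(n_procs):
            cand = xs[i] + random.uniform(-step, step)
            if objective(cand) < objective(xs[i]):
                xs[i] = cand
        # coordination: the worst process restarts near the best one
        best = min(xs, key=objective)
        worst_i = max(range(n_procs), key=lambda i: objective(xs[i]))
        xs[worst_i] = best + random.uniform(-step, step)
    return min(xs, key=objective)

x_best = collective_search()
```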

5.
In many real-world optimization problems, the underlying objective and constraint function(s) are evaluated using computationally expensive iterative simulations such as the solvers for computational electro-magnetics, computational fluid dynamics, the finite element method, etc. The default practice is to run such simulations until convergence using termination criteria, such as maximum number of iterations, residual error thresholds or limits on computational time, to estimate the performance of a given design. This information is used to build computationally cheap approximations/surrogates which are subsequently used during the course of optimization in lieu of the actual simulations. However, it is possible to exploit information on pre-converged solutions if one has the control to abort simulations at various stages of convergence. This would mean access to various performance estimates in lower fidelities. Surrogate assisted optimization methods have rarely been used to deal with such classes of problem, where estimates at various levels of fidelity are available. In this article, a multiple surrogate assisted optimization approach is presented, where solutions are evaluated at various levels of fidelity during the course of the search. For any solution under consideration, the choice to evaluate it at an appropriate fidelity level is derived from neighbourhood information, i.e. rank correlations between performance at different fidelity levels and the highest fidelity level of the neighbouring solutions. Moreover, multiple types of surrogates are used to gain a competitive edge. The performance of the approach is illustrated using a simple 1D unconstrained analytical test function. Thereafter, the performance is further assessed using three 10D and three 20D test problems, and finally a practical design problem involving drag minimization of an unmanned underwater vehicle. 
The numerical experiments clearly demonstrate the benefits of the proposed approach for such classes of problem.
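The fidelity-selection rule based on neighbourhood rank correlations can be sketched as follows: if the low-fidelity (pre-converged) ranks of neighbouring solutions agree strongly with their high-fidelity ranks, a new solution can be screened cheaply. The threshold and sample data below are invented for illustration; the Spearman formula assumes no ties.

```python
# Sketch of choosing an evaluation fidelity from the rank correlation
# between low- and high-fidelity estimates of neighbouring solutions.
# Sample data and the 0.8 threshold are hypothetical.

def ranks(values):
    """Ranks of values in ascending order (assumes no ties)."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    r = [0] * len(values)
    for rank, i in enumerate(order):
        r[i] = rank
    return r

def spearman(a, b):
    """Spearman rho via the classic 1 - 6*sum(d^2)/(n(n^2-1)) formula."""
    ra, rb = ranks(a), ranks(b)
    n = len(a)
    d2 = sum((x - y) ** 2 for x, y in zip(ra, rb))
    return 1 - 6 * d2 / (n * (n ** 2 - 1))

low_fid  = [1.2, 3.4, 2.2, 5.0, 4.1]   # cheap, aborted-early estimates
high_fid = [1.0, 3.6, 2.5, 5.2, 4.0]   # fully converged estimates

rho = spearman(low_fid, high_fid)
use_low_fidelity = rho > 0.8           # hypothetical decision rule
```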

6.
This study presents an efficient methodology that derives design alternatives and performance criteria for safety functions/systems in commercial nuclear power plants. Determination of the design alternatives and intermediate-level performance criteria is posed as a reliability allocation problem. The reliability allocation is performed in a single step by means of the concept of two-tier noninferior solutions in the objective and risk spaces within the top-level probabilistic safety criteria (PSC). Two kinds of two-tier noninferior solutions are obtained: desirable design alternatives and intolerable intermediate-level PSC of safety functions/systems. The weighted Chebyshev norm (WCN) approach with an improved Metropolis algorithm in simulated annealing is used to find the two-tier noninferior solutions. This is very efficient in searching for the global minimum of the difficult multiobjective optimization problem (MOP) which results from the strong nonlinearity of a probabilistic safety assessment (PSA) model and the nonconvexity of the problem. The methodology developed in this study can be used as an efficient design tool for desirable safety function/system alternatives and for the determination of intermediate-level performance criteria. The methodology is applied to a realistic streamlined PSA model that is developed based on the PSA results of the Surry Unit 1 nuclear power plant, and proves very efficient in providing the intolerable intermediate-level PSC and desirable design alternatives of safety functions/systems.
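The core scalarization-plus-annealing idea can be sketched in a few lines: the weighted Chebyshev norm collapses several objectives into one, which simulated annealing with the Metropolis acceptance rule then minimizes. The two objectives, weights, ideal point and cooling schedule below are illustrative stand-ins for the paper's PSA-based model.

```python
import math
import random

# Sketch: minimize the weighted Chebyshev norm (WCN) of two competing
# objectives by simulated annealing with a Metropolis acceptance rule.
# Objectives, weights, ideal point and schedule are hypothetical.

random.seed(1)
ideal = (0.0, 0.0)
weights = (0.6, 0.4)

def objectives(x):
    return (x ** 2, (x - 2.0) ** 2)     # two competing convex objectives

def wcn(x):
    f = objectives(x)
    return max(w * abs(fi - zi) for w, fi, zi in zip(weights, f, ideal))

def anneal(t0=5.0, cooling=0.95, steps=500):
    x, t = random.uniform(-5, 5), t0
    for _ in range(steps):
        cand = x + random.uniform(-0.5, 0.5)
        delta = wcn(cand) - wcn(x)
        # Metropolis rule: accept improvements always, and uphill moves
        # with probability exp(-delta / t) to escape local minima
        if delta < 0 or random.random() < math.exp(-delta / t):
            x = cand
        t *= cooling
    return x

x_star = anneal()
```

Changing the weights traces out different noninferior (Pareto) solutions, which is how the WCN approach explores the objective space.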

7.
One of the most important decisions in hybrid make-to-stock/make-to-order (MTS/MTO) production systems is capacity coordination. This paper addresses capacity coordination of hybrid MTS/MTO production systems which deal with MTS, MTO and MTS/MTO products. The proposed model is developed to cope with order acceptance/rejection policy, order due-date setting, lot-sizing of MTS products and determining the required capacity over the planning horizon. Additionally, a backward lot-sizing algorithm is developed to tackle the lot-sizing problem. The proposed model presents a general framework for deciding on capacity coordination without too many limiting mathematical assumptions, and combines qualitative and quantitative modules to cope with the aforementioned problems. Finally, a real industrial case study is reported to demonstrate the validity and applicability of the proposed model. When the model was applied in the case study, considerable improvement was achieved.
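The idea of a backward lot-sizing pass can be sketched as follows: scan periods from the end of the horizon backwards and push any demand exceeding a period's capacity into the preceding period. The demands and capacity figure are hypothetical, and the paper's actual algorithm also handles due dates and order acceptance, which this sketch omits.

```python
# Sketch of a backward lot-sizing pass over a planning horizon.
# Demands and the per-period capacity limit are illustrative.

def backward_lot_sizing(demands, capacity):
    """Return a production plan per period; a period's excess over
    capacity is shifted to the previous period. The first period may
    still end up over capacity -- a limitation of this simple sketch."""
    production = list(demands)
    for t in range(len(production) - 1, 0, -1):
        overflow = production[t] - capacity
        if overflow > 0:            # push the excess to the earlier period
            production[t] = capacity
            production[t - 1] += overflow
    return production

plan = backward_lot_sizing([30, 80, 120, 40], capacity=90)
```

All demand is conserved: the plan totals the same quantity as the demand vector, just re-timed to respect capacity.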

8.
Reliability optimization using multiobjective ant colony system approaches
The multiobjective ant colony system (ACS) meta-heuristic has been developed to provide solutions for the reliability optimization problem of series-parallel systems. This type of problem involves selection of components with multiple choices and redundancy levels that produce maximum benefits, subject to cost and weight constraints at the system level. These are very common and realistic problems encountered in the conceptual design of many engineering systems. It is becoming increasingly important to develop efficient solutions to these problems because many mechanical and electrical systems are becoming more complex, even as development schedules get shorter and reliability requirements become more stringent. The multiobjective ACS algorithm offers distinct advantages for these problems compared with alternative optimization methods, and can be applied to a more diverse problem domain with respect to the type or size of the problems. Through the combination of probabilistic search, a multiobjective formulation of local moves and the dynamic penalty method, the multiobjective ACSRAP allows us to obtain an optimal design solution very frequently and more quickly than with some other heuristic approaches. The proposed algorithm was successfully applied to an engineering design problem of a multi-stage gearbox.

9.
This paper extends our study of general blocking kanban control mechanisms in multicell manufacturing. The structural results developed in part I of this two-paper sequence are used to develop an efficient design framework for the optimal configuration of kanban control systems. The structural properties of the design problems and their relationships are established. The framework consists of optimization algorithms for the design problems over neighbourhood lattice design spaces. Extensive computational analyses show that the proposed algorithms determine the optimal configuration in a neighbourhood by exploring about 3% of the neighbourhood set. The quality of the neighbourhood solutions has been demonstrated by comparing them with benchmark strategies and an upper bound on throughput. The computational efficiency and the quality of solutions show that the proposed approach is efficient and practically viable.

10.
A. Barreiros. Engineering Optimization, 2013, 45(5): 475-488
A new numerical approach to the solution of two-stage stochastic linear programming problems is described and evaluated. The approach avoids the solution of the first-stage problem and uses the underlying deterministic problem to generate a sequence of values of the first-stage variables which lead to successive improvements of the objective function towards the optimal policy. The model is evaluated using an example in which randomness is described by two correlated factors. The dynamics of these factors are described by stochastic processes simulated using lattice techniques. In this way, discrete distributions of the random parameters are assembled. The solutions obtained with the new iterative procedure are compared with solutions obtained with a deterministic equivalent linear programming problem. It is concluded that they are almost identical. However, the computational effort required for the new approach is negligible compared with that needed for the deterministic equivalent problem.

11.
This article considers a scheduling problem arising in flexible manufacturing systems. It is assumed that a computer numerical control machine processes a set of jobs with a set of wearing tools. The tool magazine of the machine has a given capacity and each job requires some subset of tools. The goal is to minimize the average completion time of the jobs by choosing their processing order and the tool management decisions intelligently. Previous studies of this problem have either omitted tool wear or assumed only one tool type. This study gives a mathematical formulation for the problem when the tool lifetimes are deterministic. It is shown that problems of a practical size cannot be solved to optimality within a reasonable time. Therefore, genetic algorithms and local search methods are considered for solving the problem. When the solutions of these new algorithms are compared against the optimal solutions and lower bounds, they prove to be nearly optimal.
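The tool-management side of such problems is classically handled, for a fixed job sequence, by the Keep-Tool-Needed-Soonest (KTNS) rule: when the magazine is full, evict the loaded tool whose next use lies furthest in the future. The sketch below counts tool switches under KTNS on invented data; the paper's problem additionally models tool wear and optimizes the job sequence itself, neither of which appears here.

```python
# Sketch of the classic KTNS tool-switching rule for a fixed job
# sequence. Job/tool data and magazine capacity are illustrative.

def ktns_switches(jobs, capacity):
    """jobs: list of tool sets required per job (each assumed to fit
    in the magazine). Returns the total number of tool switches."""
    magazine, switches = set(), 0
    for i, need in enumerate(jobs):
        missing = need - magazine
        free = capacity - len(magazine)
        evict_count = max(0, len(missing) - free)
        if evict_count:
            # evict loaded tools whose next use is furthest in the future
            def next_use(tool):
                for j in range(i + 1, len(jobs)):
                    if tool in jobs[j]:
                        return j
                return len(jobs)        # never needed again
            candidates = sorted(magazine - need, key=next_use, reverse=True)
            for tool in candidates[:evict_count]:
                magazine.remove(tool)
        magazine |= need
        switches += len(missing)
    return switches

n = ktns_switches([{1, 2}, {2, 3}, {1, 3}, {1, 4}], capacity=3)
```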

12.
Indefinite quadratically constrained quadratic programming problems arise widely in chip design, wireless communication networks, finance, and many practical engineering problems. No general global convergence criterion is currently available, which makes computing a global optimum of this problem highly challenging. This paper uses elementary matrix transformations to convert the original problem into an equivalent bilinear programming problem. Based on the characteristics of the equivalent problem and a linearization relaxation technique, a relaxed linear program of the equivalent problem is constructed, and the global optimum of the original problem is approached step by step by solving a sequence of these relaxed linear programs. Global convergence of the algorithm is proved, and numerical comparisons and randomized experiments show that the algorithm is efficient and practical.
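The standard linearization device for bilinear terms of the kind this abstract invokes is the McCormick envelope: over a box, any product w = x*y is sandwiched between four linear functions, which is what lets a bilinear program be relaxed into a linear one. The sketch below evaluates those envelopes at a sample point; the box and point are invented, and the paper's algorithm (whose exact relaxation may differ) refines such relaxations over successively smaller regions.

```python
# Sketch of McCormick envelopes: linear lower/upper bounds on the
# bilinear term w = x*y over the box [xl, xu] x [yl, yu]. Sample
# bounds and evaluation point are illustrative.

def mccormick_bounds(x, y, xl, xu, yl, yu):
    """Return (lower, upper) linear-envelope values at (x, y)."""
    lower = max(xl * y + x * yl - xl * yl,
                xu * y + x * yu - xu * yu)
    upper = min(xu * y + x * yl - xu * yl,
                xl * y + x * yu - xl * yu)
    return lower, upper

lo, hi = mccormick_bounds(1.5, 2.0, xl=1.0, xu=3.0, yl=0.0, yu=4.0)
# the true product always lies inside the envelope
assert lo <= 1.5 * 2.0 <= hi
```

Shrinking the box tightens the envelope, which is why branch-and-bound style refinement drives such relaxations toward the global optimum.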

13.
DIviding RECTangles (DIRECT), as a well-known derivative-free global optimization method, has been found to be effective and efficient for low-dimensional problems. When facing high-dimensional black-box problems, however, DIRECT's performance deteriorates. This work proposes a series of modifications to DIRECT for high-dimensional problems (dimensionality d>10). The principal idea is to increase the convergence speed by breaking its single initialization-to-convergence approach into several more intricate steps. Specifically, starting with the entire feasible area, the search domain will shrink gradually and adaptively to the region enclosing the potential optimum. Several stopping criteria have been introduced to avoid premature convergence. A diversification subroutine has also been developed to prevent the algorithm from being trapped in local minima. The proposed approach is benchmarked using nine standard high-dimensional test functions and one black-box engineering problem. All these tests show a significant efficiency improvement over the original DIRECT for high-dimensional design problems.

14.
To remain competitive and gain new shares of the market, industries must develop their products quickly while meeting the multiple customer requirements. To reduce product development time, the design step is often accomplished by several working groups working in parallel. These working groups are often decentralized and are supervised by a director. This paper focuses on solving a multi-objective problem in a setting that is called a “decentralized environment.” Collaborative optimization is a strategy used for solving problems in a decentralized environment. This strategy divides a problem into subproblems in order to give more autonomy to working groups, thus facilitating work in parallel. In this paper, collaborative optimization is paired with an interactive algorithm to solve multi-objective problems in a decentralized environment. It can be easily adjusted within the structure of a development process in a given industry and allows collaboration between the director and his/her working groups. The algorithm captures the director’s and the working groups’ preferences and generates several Pareto-optimal solutions. The algorithm was tested on a two-bar structure problem. The results obtained match those published in the literature.

15.
In this work we present a mixed-integer model for the optimal design of production/transportation systems. In contrast to standard design problems, our model is originally based on a coupled system of differential equations capturing the dynamics of manufacturing processes and stocks. The problem is to select an optimal parameter configuration from a predefined set such that respective constraints are fulfilled. We focus on single commodity flows over large time scales as well as highly interconnected networks and propose a suitable start heuristic to ensure feasibility and to speed up the solution procedure.

16.
This paper puts forward a quantitative approach aimed at the understanding of the evolutionary paths of change of emerging nanotechnological innovation systems. The empirical case of the newly emerging zinc oxide one-dimensional nanostructures is used. In line with other authors, ‘problems’ are visualized as those aspects guiding the dynamics of innovation systems. It is argued that the types of problems confronted by an innovation system, and in turn its dynamics of change, are imprinted on the nature of the underlying knowledge bases. The latter is operationalized through the construction of co-citation networks from scientific publications. We endow these co-citation networks with directionality through the allocation of a particular problem, drawn from a ‘problem space’ for nanomaterials, to each network node. By analyzing the longitudinal, structural and cognitive changes undergone by these problem-attached networks, we attempt to infer the nature of the paths of change of emerging nanotechnological innovation systems. Overall, our results stress the evolutionary mechanisms underlying change in a specific N&N subfield. It is observed that the latter may exert significant influence on the innovative potentials of nanomaterials.

17.
The use of molecular-dynamics simulations to understand the ejection processes of particles from surfaces after energetic ion bombardment is discussed. Substrates considered include metals, covalent and ionic materials, polymers and molecular solids. It is shown how the simulations can be used to aid interpretation of experimental results by providing the underlying mechanisms behind the ejection processes.

18.
Published data demonstrating the possibilities of electrochemical methods as applied to solving particular problems arising in various steps of industrial reprocessing of spent nuclear fuel using the Purex process are summarized. Attention is given to stabilization of U, Np, and Pu in required valence states in aqueous solutions and two-phase systems in the course of Pu stripping in the step of its separation from U, to efficient dissolution of PuO2, to isolation of noble metals from high-level liquid waste, and to breakdown of spent organic solvents and complexing agents. Progress in the industrial use of electrolysis, in search for new electrode materials, and in studies of the mechanisms of electrocatalytic oxidation processes is noted.

19.
A production logistics system is often subject to high operational dynamics due to large working areas, frequent resource interactions, long operation periods and intensive human involvement. Researchers have applied system dynamics to design the structure of statistically robust systems which accommodate common dynamics. Yet this approach is beginning to lose its feasibility, because anticipating dynamics and gathering statistics are becoming more difficult in ever more competitive markets, and adjustments to system structure typically incur high costs. In response, this study explores how a robust information structure can be designed and how real-time control schemes can be applied to control the dynamics inherent in real-life systems. Motivated by the wide application of industrial Internet-of-Things (IoT) systems, this paper investigates typical production logistics execution processes and adopts system dynamics to design cost-effective IoT solutions. The internal and external production logistics processes are first investigated separately. Using sensitivity analysis, the optimal IoT solutions are evaluated and analysed to provide guidance on IoT implementation. Internal and external production logistics processes are then combined into an integrated structure to offer a generic system dynamics approach. This research not only enhances the use of system dynamics, but also presents a quantitative IoT system analysis approach.

20.
We study the integrated logistics network design and inventory stocking problem as characterized by the interdependency of the design and stocking decisions in service parts logistics. These two sets of decisions are usually considered sequentially in practice, and the associated problems are tackled separately in the research literature. The overall problem is typically further complicated due to time-based service constraints that provide lower limits on the percentage of demand satisfied within specified time windows. We introduce an optimization model that explicitly captures the interdependency between network design (location of facilities, and allocation of demands to facilities) and inventory stocking decisions (stock levels and their corresponding stochastic fill rates), and present computational results from our extensive experiments that investigate the effects of several factors including demand levels, time-based service levels and costs. We show that the integrated approach can provide significant cost savings over the decoupled approach (solving the network design first and inventory stocking next), shifting the whole efficient frontier curve between cost and service level to superior regions. We also show that the decoupled and integrated approaches may generate totally different solutions, even in the number of located facilities and in their locations, magnifying the importance of considering inventory as part of the network design models.


Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号