Similar Articles
 Found 20 similar articles (search time: 250 ms)
1.
This paper presents a sampling-based RBDO method using surrogate models. The Dynamic Kriging (D-Kriging) method is used for the surrogate models, and a stochastic sensitivity analysis is introduced to compute the sensitivities of probabilistic constraints with respect to independent or correlated random variables. For sampling-based RBDO, which requires Monte Carlo simulation (MCS) to evaluate the probabilistic constraints and stochastic sensitivities, this paper proposes new efficiency and accuracy strategies: a hyper-spherical local window for surrogate model generation, sample reuse, local window enlargement, filtering of constraints, and an adaptive initial point for the pattern search. To further improve the computational efficiency of the sampling-based RBDO method for large-scale engineering problems, parallel computing is proposed as well. Once the D-Kriging model accurately approximates the responses, there is no further approximation in the estimation of the probabilistic constraints and stochastic sensitivities, and thus the sampling-based RBDO can yield a very accurate optimum design. In addition, the newly proposed efficiency strategies, together with parallel computing, help find the optimum design very efficiently. Numerical examples verify that the proposed sampling-based RBDO finds the optimum design more accurately than some existing methods. It is also more efficient than some existing methods for low-dimensional problems, and as efficient as they are for high-dimensional problems when parallel computing is utilized.
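The core step of the sampling-based approach is estimating each probabilistic constraint by MCS on the surrogate rather than on the expensive true model. A minimal sketch of that step is below, with a generic `surrogate` callable standing in for the Dynamic Kriging predictor; the function name, the failure convention g(x) > 0, and the independent Gaussian input model are illustrative assumptions, not details from the paper.

```python
import numpy as np

def prob_of_failure(surrogate, mean, std, n_samples=100_000, seed=0):
    """Estimate P[g(X) > 0] by Monte Carlo on a cheap surrogate.

    surrogate: callable mapping an (n, d) array of inputs to n responses;
    stands in for the Dynamic Kriging predictor. Failure taken as
    g(x) > 0 (sign convention is an assumption for this sketch).
    """
    rng = np.random.default_rng(seed)
    x = rng.normal(mean, std, size=(n_samples, len(mean)))  # independent normals
    g = surrogate(x)
    return np.mean(g > 0.0)

# Toy usage: a quadratic limit state standing in for a trained surrogate.
pf = prob_of_failure(lambda x: x[:, 0]**2 + x[:, 1] - 3.0,
                     mean=np.array([0.0, 0.0]), std=np.array([1.0, 1.0]))
print(f"estimated probability of failure: {pf:.4f}")
```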

2.
Reliability analysis and reliability-based design optimization (RBDO) require an exact input probabilistic model to obtain an accurate probability of failure (PoF) and RBDO optimum design. However, in practical engineering problems, often only limited input data is available to generate the input probabilistic model. The insufficient input data induces uncertainty in the input probabilistic model, and this uncertainty in turn makes the PoF itself uncertain. It is therefore necessary to treat the PoF as following a probability distribution. In this paper, the probability distribution of the PoF is obtained from consecutive conditional probabilities of input distribution types and parameters using a Bayesian approach. The approximate conditional probabilities are obtained under reasonable assumptions, and Monte Carlo simulation is applied to calculate the distribution of the PoF. The probability that the PoF does not exceed a user-specified target PoF is defined as the conservativeness level of the PoF. The conservativeness level, in addition to the target PoF, is used as a probabilistic constraint in the RBDO process to obtain a conservative optimum design under limited input data. Accordingly, the design sensitivity of the conservativeness level is derived to support an efficient optimization process. Using numerical examples, it is demonstrated that the conservativeness level should be involved in RBDO when input data is limited. The accuracy and efficiency of the proposed design sensitivity method are verified. Finally, conservative RBDO optimum designs are obtained using the developed methods for limited-input-data problems.
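The idea of a PoF that is itself random can be pictured with a nested Monte Carlo sketch: draw plausible distribution parameters given the limited data, compute a PoF for each draw, and read the conservativeness level off the resulting sample. The normal-data posterior draws and the toy limit state g(x) = x - 3 below are illustrative stand-ins, not the paper's conditional-probability construction.

```python
import numpy as np

def pof_distribution(data, target_pof=0.02, n_param=200, n_mcs=20_000, seed=1):
    """Sketch: distribution of the PoF under parameter uncertainty from
    limited normal data, for a toy limit state g(x) = x - 3 (failure
    when g > 0). Posterior draws use the standard normal/chi-square
    resampling, a crude stand-in for the paper's Bayesian approach."""
    rng = np.random.default_rng(seed)
    n = len(data)
    xbar, s = data.mean(), data.std(ddof=1)
    pofs = np.empty(n_param)
    for i in range(n_param):
        # posterior-style draws for (mu, sigma) given the limited data
        sigma = s * np.sqrt((n - 1) / rng.chisquare(n - 1))
        mu = rng.normal(xbar, sigma / np.sqrt(n))
        x = rng.normal(mu, sigma, n_mcs)
        pofs[i] = np.mean(x - 3.0 > 0.0)
    conservativeness = np.mean(pofs <= target_pof)  # P(PoF <= target)
    return pofs, conservativeness

pofs, level = pof_distribution(np.random.default_rng(0).normal(0.0, 1.0, 15))
print(f"median PoF: {np.median(pofs):.4f}, conservativeness level: {level:.2f}")
```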

3.
This paper puts forward two new methods for reliability-based design optimization (RBDO) of complex engineering systems. The methods involve an adaptive-sparse polynomial dimensional decomposition (AS-PDD) of a high-dimensional stochastic response for reliability analysis, a novel integration of AS-PDD and score functions for calculating the sensitivities of the failure probability with respect to design variables, and standard gradient-based optimization algorithms, encompassing a multi-point, single-step design process. The two methods, which differ in how the failure probability and its design sensitivities are evaluated, exploit two distinct combinations built on AS-PDD: the AS-PDD-SPA method, entailing the saddlepoint approximation (SPA) and score functions; and the AS-PDD-MCS method, utilizing embedded Monte Carlo simulation (MCS) of the AS-PDD approximation and score functions. In both methods, the failure probability and its design sensitivities are determined concurrently from a single stochastic simulation or analysis. When applied within the multi-point, single-step framework, the proposed methods afford the ability to solve industrial-scale design problems. Numerical results from mathematical functions and elementary engineering problems indicate that the new methods provide more computationally efficient design solutions than existing methods. Furthermore, shape design of a 79-dimensional jet engine bracket was performed, demonstrating the power of the AS-PDD-MCS method in tackling practical RBDO problems.

4.
There are two commonly used analytical reliability analysis methods: the first-order reliability method (FORM), based on a linear approximation of the performance function, and the second-order reliability method (SORM), based on a quadratic approximation. Reliability analysis using FORM can be acceptably accurate for mildly nonlinear performance functions, whereas SORM may be necessary for nonlinear and multi-dimensional performance functions. Even though reliability analysis using SORM may be accurate, it is not widely used for probability of failure calculations because SORM requires the second-order sensitivities. Moreover, a SORM-based inverse reliability analysis is rather difficult to develop. This paper proposes an inverse reliability analysis method that yields accurate probability of failure calculations without requiring the second-order sensitivities, for reliability-based design optimization (RBDO) of nonlinear and multi-dimensional systems. For the inverse reliability analysis, a most probable point (MPP)-based dimension reduction method (DRM) is developed. Since the FORM-based reliability index (β) is inaccurate for the MPP search of a nonlinear performance function, a three-step computational procedure is proposed to improve the accuracy of the inverse reliability analysis: probability of failure calculation using a constraint shift, reliability index update, and MPP update. Using these three steps, a new DRM-based MPP is obtained, which estimates the probability of failure of the performance function more accurately than FORM and more efficiently than SORM. The DRM-based MPP is then used in the next design iteration of RBDO to obtain an accurate optimum design even for nonlinear and/or multi-dimensional systems. Since DRM-based RBDO requires more function evaluations, the enriched performance measure approach (PMA+), with new tolerances for constraint activeness and a reduced rotation matrix, is used to reduce the number of function evaluations.
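For context, FORM locates the MPP in standard normal space and reports the reliability index β as its distance from the origin, with Pf ≈ Φ(−β); the abstract's point is that this first-order estimate degrades for nonlinear performance functions. Below is a minimal textbook HL-RF iteration for the MPP search, not the paper's DRM correction; the toy limit state and the failure convention g < 0 are assumptions of this sketch.

```python
import numpy as np

def form_hlrf(g, grad_g, d=2, tol=1e-8, max_iter=100):
    """Hasofer-Lind/Rackwitz-Fiessler MPP search in standard normal space.

    Returns the reliability index beta = ||u*|| and the MPP u*.
    Generic textbook FORM; the paper refines this estimate with DRM.
    """
    u = np.zeros(d)
    for _ in range(max_iter):
        gv, gr = g(u), grad_g(u)
        u_new = (gr @ u - gv) * gr / (gr @ gr)  # HL-RF update formula
        if np.linalg.norm(u_new - u) < tol:
            break
        u = u_new
    return np.linalg.norm(u), u

# Toy limit state: g(u) = u1 + u2 - 3, failure when g < 0.
beta, mpp = form_hlrf(lambda u: u[0] + u[1] - 3.0,
                      lambda u: np.array([1.0, 1.0]))
print(f"beta = {beta:.4f}, Pf ~ Phi(-beta)")  # beta = 3/sqrt(2) here
```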

5.
For obtaining a correct reliability-based optimum design, the input statistical model, which includes the marginal and joint distributions of the input random variables, needs to be accurately estimated. However, in most engineering applications, only limited data on the input variables are available due to expensive testing costs. An input statistical model estimated from insufficient data will be inaccurate, which leads to an unreliable optimum design. In this paper, reliability-based design optimization (RBDO) with a confidence level for input normal random variables is proposed to offset the inaccurate estimation of the input statistical model, by using an adjusted standard deviation and correlation coefficient that account for the effect of inaccurate estimation of the mean, standard deviation, and correlation coefficient.
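One common way to build such an adjustment, sketched below for normal data, is to replace the sample standard deviation with an upper confidence bound derived from the chi-square distribution; the specific one-sided bound used here is a standard textbook interval, offered as an illustration rather than the paper's exact formula.

```python
import numpy as np
from scipy.stats import chi2

def adjusted_std(data, confidence=0.95):
    """Upper (one-sided) confidence bound on sigma for normal data.

    Illustrative stand-in for the paper's adjusted standard deviation:
    with limited samples, designing against this inflated sigma hedges
    the statistical uncertainty in the estimate.
    """
    n = len(data)
    s2 = np.var(data, ddof=1)
    # sigma^2 <= (n-1) s^2 / chi2_{1-confidence, n-1} holds with prob. `confidence`
    return np.sqrt((n - 1) * s2 / chi2.ppf(1.0 - confidence, df=n - 1))

x = np.random.default_rng(2).normal(0.0, 1.0, 12)
print(f"sample std: {x.std(ddof=1):.3f}, 95% adjusted std: {adjusted_std(x):.3f}")
```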

6.
Reliability-based design optimization (RBDO) aims at determining the optimal design in the presence of uncertainty. The available single-loop approaches for RBDO are based on the first-order reliability method (FORM) for the computation of the probability of failure, along with different approximations that avoid the expensive inner loop of finding the most probable point (MPP). However, the use of FORM in RBDO may not be sufficiently accurate, depending on the degree of nonlinearity of the limit-state function. This is demonstrated for an extensively studied reliability-based vehicle crashworthiness design problem solved in this paper, where all RBDO methods based on FORM strongly violate the probabilistic constraints. The Response Surface Single Loop (RSSL) method for RBDO is proposed, based on the higher-order probability computation for quadratic models previously presented by the authors. The RSSL method bypasses the concept of an MPP and has high accuracy and efficiency. The method can solve problems with both constant and varying standard deviations of the design variables and is particularly well suited for typical industrial applications where general quadratic response surface models can be used. If the quadratic response surface models of the deterministic constraints are valid in the whole region of interest, the method becomes a true single-loop method with accuracy higher than traditional SORM. In other cases, quadratic response surface models are fitted to the deterministic constraints around the deterministic solution and the RBDO problem is solved using the proposed single-loop method.
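A quadratic response surface of the kind RSSL relies on can be fitted by ordinary least squares; the sketch below shows the model form for a generic problem (this is standard fitting code, not the authors' higher-order probability computation).

```python
import numpy as np

def fit_quadratic_surface(X, y):
    """Least-squares fit of g(x) ~ c0 + sum_i(b_i x_i) + sum_ij(a_ij x_i x_j).

    Generic quadratic response-surface fit; RSSL then computes failure
    probabilities from such a model instead of searching for an MPP.
    """
    n, d = X.shape
    cols = [np.ones(n)]
    cols += [X[:, i] for i in range(d)]
    cols += [X[:, i] * X[:, j] for i in range(d) for j in range(i, d)]
    A = np.column_stack(cols)
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    return coef

rng = np.random.default_rng(3)
X = rng.uniform(-2, 2, size=(50, 2))
y = 1.0 + X[:, 0] - 0.5 * X[:, 1] + 0.3 * X[:, 0] * X[:, 1] + X[:, 1]**2
print(fit_quadratic_surface(X, y).round(3))  # recovers the true coefficients
```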

7.
We give a sampling-based algorithm for the k-Median problem, with running time $O\bigl(k(\frac{k^2}{\epsilon}\log k)^2\log(\frac{k}{\epsilon}\log k)\bigr)$, where $k$ is the desired number of clusters and $\epsilon$ is a confidence parameter. This is the first k-Median algorithm with fully polynomial running time that is independent of $n$, the size of the data set. It gives a solution that is, with high probability, an $O(1)$-approximation, if each cluster in some optimal solution has $\Omega(\frac{n\epsilon}{k})$ points. We also give weakly polynomial-time algorithms for this problem and a relaxed version of k-Median in which a small fraction of outliers can be excluded. We give near-matching lower bounds showing that this assumption about cluster size is necessary. We also present a related algorithm for finding a clustering that excludes a small number of outliers.

8.
The "direct product problem" is a fundamental question in complexity theory which seeks to understand how the difficulty of computing a function on each of $k$ independent inputs scales with $k$. We prove the following direct product theorem (DPT) for query complexity: if every $T$-query algorithm has success probability at most $1-\varepsilon$ in computing the Boolean function $f$ on input distribution $\mu$, then for $\alpha \le 1$, every $\alpha\varepsilon Tk$-query algorithm has success probability at most $(2^{\alpha\varepsilon}(1-\varepsilon))^k$ in computing the $k$-fold direct product $f^{\otimes k}$ correctly on $k$ independent inputs from $\mu$. In light of examples due to Shaltiel, this statement gives an essentially optimal trade-off between the query bound and the error probability. Using this DPT, we show that for an absolute constant $\alpha > 0$, the worst-case success probability of any $\alpha R_2(f)k$-query randomized algorithm for $f^{\otimes k}$ falls exponentially with $k$. The best previous statement of this type, due to Klauck, Špalek, and de Wolf, required a query bound of $O(\mathrm{bs}(f)k)$. Our proof technique involves defining and analyzing a collection of martingales associated with an algorithm attempting to solve $f^{\otimes k}$. Our method is quite general and yields a new XOR lemma and threshold DPT for the query model, as well as DPTs for the query complexity of learning tasks, search problems, and tasks involving interaction with dynamic entities. We also give a version of our DPT in which decision tree size is the resource of interest.
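As a worked instance of the bound (numbers chosen here purely for illustration): with $\varepsilon = 1/2$ and $\alpha = 1/2$, any $(Tk/4)$-query algorithm computes $f^{\otimes k}$ with success probability at most $(2^{1/4}\cdot\frac{1}{2})^k \approx 0.595^k$, which decays exponentially in $k$.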

9.
This paper proposes a methodology for sampling-based design optimization in the presence of interval variables. Assuming that an accurate surrogate model is available, the proposed method first searches for the worst combination of interval variables: for the constraints when only interval variables are present, or for the probabilistic constraints when both interval and random variables are present. Because the worst combination of interval variables for the probability of failure does not always coincide with that for a performance function, the proposed method directly uses the probability of failure to obtain the worst combination of interval variables when both interval and random variables are present. To calculate the sensitivities of the constraints and probabilistic constraints with respect to interval variables by the sampling-based method, the behavior of the interval variables at the worst case is defined by the Dirac delta function. Monte Carlo simulation is then applied to calculate the constraints and probabilistic constraints at the worst combination of interval variables, along with their sensitivities. A merit of using an MCS-based approach in the X-space is that it requires neither gradients of the performance functions nor a transformation from X-space to U-space for reliability analysis; thus there is no approximation or restriction in calculating the sensitivities of constraints or probabilistic constraints. Numerical results indicate that the proposed method can search for the worst-case probability of failure both efficiently and accurately, and that it can perform design optimization with a mixture of random and interval variables by utilizing the worst-case probability of failure search.
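The worst-case search over interval variables can be illustrated with a brute-force grid-plus-MCS sketch; the grid search, the toy limit state, and the failure convention below are assumptions of this illustration, not the paper's Dirac-delta sensitivity formulation.

```python
import numpy as np

def worst_case_pof(g, intervals, mean, std, n_grid=21, n_mcs=50_000, seed=4):
    """Find the combination of interval variables y maximizing
    P[g(X, y) > 0], with X random and y swept over a grid.
    Brute-force stand-in for the paper's worst-case search."""
    rng = np.random.default_rng(seed)
    x = rng.normal(mean, std, size=(n_mcs, len(mean)))
    grids = [np.linspace(lo, hi, n_grid) for lo, hi in intervals]
    best_y, best_pf = None, -1.0
    for y in np.stack(np.meshgrid(*grids), axis=-1).reshape(-1, len(intervals)):
        pf = np.mean(g(x, y) > 0.0)
        if pf > best_pf:
            best_y, best_pf = y, pf
    return best_y, best_pf

# Toy problem: one random variable, one interval variable in [-1, 1].
y_star, pf = worst_case_pof(lambda x, y: x[:, 0] + y[0] - 2.0,
                            intervals=[(-1.0, 1.0)],
                            mean=np.array([0.0]), std=np.array([1.0]))
print(f"worst interval value: {y_star}, worst-case PoF: {pf:.4f}")
```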

10.
This study aims to develop an integrated computational framework for the reliability-based design optimization (RBDO) of wind turbine drivetrains, so as to assure the target reliability under wind load and gear manufacturing uncertainties. Gears in wind turbine drivetrains are subjected to severe cyclic loading due to highly variable wind loads that are stochastic in nature. The failure rate of drivetrain systems is thus reported to be higher than that of other wind turbine components, and improving drivetrain reliability is critically important for reducing the downtime caused by gear failures. In the numerical procedure developed in this study, a wide spatiotemporal variability of wind loads is considered using 249 sets of wind data to evaluate the probabilistic contact fatigue life in the sampling-based RBDO. To account for wind load uncertainty in the evaluation of tooth contact fatigue, multiple drivetrain dynamics simulations need to be run under various wind load scenarios in the RBDO process. For this reason, a numerical procedure based on the multivariable tabular contact search algorithm is applied to the modeling of wind turbine drivetrains to reduce the overall computational time while retaining the precise contact geometry required for the gear tooth profile optimization. An integrated computational framework for wind turbine drivetrain RBDO is then developed by incorporating the wind load uncertainty, the rotor blade aerodynamics model, the drivetrain dynamics model, and the probabilistic contact fatigue failure model. It is demonstrated that the RBDO optimum for a 750 kW wind turbine drivetrain obtained using this procedure achieves the target 97.725% reliability (2-sigma quality level) with only a 1.4% increase in total weight over the baseline design, which had a reliability of 8.3%. Furthermore, it is shown that the tooth profile optimization, with tip relief introduced as a design variable, prevents a large increase in the face width, which would otherwise substantially increase the weight (cost) of the drivetrain in order to satisfy the target reliability against tooth contact fatigue failure.

11.
We consider discrete-time projective semilinear control systems \(\xi_{t+1} = A(u_t)\cdot\xi_t\), where the states \(\xi_t\) are in projective space \(\mathbb{R}\mathrm{P}^{d-1}\), the inputs \(u_t\) are in a manifold \(\mathcal{U}\) of arbitrary finite dimension, and \(A:\mathcal{U}\rightarrow \mathrm{GL}(d,\mathbb{R})\) is a differentiable mapping. An input sequence \((u_0,\ldots,u_{N-1})\) is called universally regular if, for any initial state \(\xi_0 \in \mathbb{R}\mathrm{P}^{d-1}\), the derivative of the time-\(N\) state with respect to the inputs is onto. In this paper, we deal with the universal regularity of constant input sequences \((u_0,\ldots,u_0)\). Our main result states that generically in the space of such systems, for sufficiently large \(N\), all constant inputs of length \(N\) are universally regular, with the exception of a discrete set. More precisely, the conclusion holds for a \(C^2\)-open and \(C^\infty\)-dense set of maps \(A\), and \(N\) depends only on \(d\) and on the dimension of \(\mathcal{U}\). We also show that the inputs in that discrete set are nearly universally regular; indeed, there is a unique non-regular initial state, and its corank is 1. In order to establish the result, we study the spaces of bilinear control systems. We show that the codimension of the set of systems for which the zero input is not universally regular coincides with the dimension of the control space. The proof is based on careful matrix analysis and some elementary algebraic geometry. The main result then follows by applying standard transversality theorems.

12.
We study the problem of answering $k$-hop reachability queries in a directed graph, i.e., whether there exists a directed path of length $k$ from a source query vertex to a target query vertex in the input graph. The $k$-hop reachability problem is a generalization of the classic reachability problem (where $k=\infty$). Existing indexes for processing classic reachability queries, as well as those for processing shortest-path distance queries, are either not applicable or not efficient for processing $k$-hop reachability queries. We propose an efficient index for processing $k$-hop reachability queries. Our experimental results on a wide range of real datasets show that our method is efficient and scalable in terms of both index construction and query processing.
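Without an index, a single k-hop reachability query reduces to a depth-bounded breadth-first search; the sketch below shows that baseline on a plain-Python adjacency list (it bears no relation to the paper's index structure).

```python
from collections import deque

def k_hop_reachable(adj, source, target, k):
    """Return True if target is reachable from source within k hops.

    adj: dict mapping a vertex to an iterable of out-neighbors.
    Baseline depth-bounded BFS; an index as proposed in the paper
    answers such queries without traversing the graph at query time.
    """
    if source == target:
        return True
    seen = {source}
    frontier = deque([(source, 0)])
    while frontier:
        v, depth = frontier.popleft()
        if depth == k:
            continue  # may not extend the path beyond k hops
        for w in adj.get(v, ()):
            if w == target:
                return True
            if w not in seen:
                seen.add(w)
                frontier.append((w, depth + 1))
    return False

adj = {0: [1], 1: [2], 2: [3]}
print(k_hop_reachable(adj, 0, 3, 2), k_hop_reachable(adj, 0, 3, 3))  # False True
```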

13.
In a sampling problem, we are given an input $x \in \{0,1\}^n$, and asked to sample approximately from a probability distribution $\mathcal{D}_x$ over $\operatorname{poly}(n)$-bit strings. In a search problem, we are given an input $x \in \{0,1\}^n$, and asked to find a member of a nonempty set $A_x$ with high probability. (An example is finding a Nash equilibrium.) In this paper, we use tools from Kolmogorov complexity to show that sampling and search problems are "essentially equivalent." More precisely, for any sampling problem $S$, there exists a search problem $R_S$ such that, if $\mathcal{C}$ is any "reasonable" complexity class, then $R_S$ is in the search version of $\mathcal{C}$ if and only if $S$ is in the sampling version. What makes this nontrivial is that the same $R_S$ works for every $\mathcal{C}$. As an application, we prove the surprising result that SampP = SampBQP if and only if FBPP = FBQP. In other words, classical computers can efficiently sample the output distribution of every quantum circuit if and only if they can efficiently solve every search problem that quantum computers can solve.

14.
Tensile membrane structures (TMS) are lightweight, flexible structures designed to span long distances with structural efficiency. The stability of a TMS is jeopardised under heavy wind forces due to its inherent flexibility and its inability to carry out-of-plane moment and shear. A TMS that remains stable under uncertain wind loads (without any tearing failure) can only be achieved by a proper choice of the initial prestress. In this work, a double-loop reliability-based design optimisation (RBDO) of TMS under uncertain wind load is proposed. Using a sequential polynomial chaos expansion (PCE) and kriging-based metamodel, this RBDO reduces the cost of the inner-loop reliability analysis, which involves an intensive finite element solver. The proposed general approach is applied to the RBDO of two benchmark TMS, and its computational efficiency is demonstrated through these case studies. The method developed here is suggested for RBDO of large and complex engineering systems requiring costly numerical solution.

15.
A separable input state consisting of an $n$-photon Fock state and a coherent state propagating through coupled waveguides is investigated in detail. We obtain analytical solutions for the state vector evolution, the wavefunction or probability distribution in quadrature space, and the $P$-function in phase space. It is proved that the propagating states may evolve into quantum vortex states, even for coupled lossy waveguides, by appropriately selecting the propagation time. Based on the analytical $P$-function in phase space and the relative linear entropy of the propagating state, it is found that the propagating state may be entangled and non-classical. In particular, in the absence of loss, the degree of entanglement depends only on the photon number $n$ of the input Fock state and is independent of the displacement parameter $\alpha$ associated with the input coherent state. Moreover, for coupled lossy waveguides the entanglement evolution can exhibit new features.

16.
Reliability-based design optimization (RBDO) algorithms, such as the Reliability Index Approach (RIA) and the Performance Measure Approach (PMA), have been developed to solve engineering optimization problems under design uncertainties. In some existing methods, the random design space is transformed to the standard normal design space, and a reliability measure, such as the reliability index from RIA or the performance measure from PMA, is estimated in order to evaluate the failure probability. When a random variable is arbitrarily distributed and cannot be properly fitted to any known form of probability density function, these existing RBDO methods cannot perform reliability analysis in the original design space. This paper proposes a novel Ensemble of Gradient-based Transformed Reliability Analyses (EGTRA) to evaluate the failure probability of arbitrarily distributed random variables in the original design space. The arbitrary distribution of the random variable is approximated by a merger of multiple Gaussian kernel functions in a single-variate coordinate directed toward the gradient of the constraint function. The failure probability is then estimated using the ensemble of the kernel-wise reliability analyses. This paper further derives a linearly approximated probabilistic constraint at the design point with an allowable reliability level in the original design space, using the aforementioned fundamentals and techniques. Numerical examples with generated random distributions show that existing RBDO algorithms can improperly approximate the uncertainties as Gaussian distributions and provide solutions with poor reliability assessments, whereas the numerical results show that EGTRA is capable of efficiently solving RBDO problems with arbitrarily distributed uncertainties.
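The kernel-merger idea can be pictured with a one-dimensional sketch: approximate an arbitrary sample distribution by a sum of Gaussian kernels, then evaluate a tail probability from the mixture analytically. The bandwidth rule and the threshold below are illustrative choices, not EGTRA's construction.

```python
import numpy as np
from scipy.stats import norm

def kernel_tail_probability(samples, threshold):
    """Approximate P[X > threshold] from data via a Gaussian kernel mixture.

    Each sample contributes one kernel (Silverman's rule bandwidth);
    the tail probability is the average of the kernels' tail masses.
    Illustrates the 'merger of Gaussian kernels' idea behind EGTRA.
    """
    n = len(samples)
    h = 1.06 * samples.std(ddof=1) * n ** (-0.2)  # Silverman's rule of thumb
    return np.mean(norm.sf(threshold, loc=samples, scale=h))

rng = np.random.default_rng(5)
data = rng.exponential(1.0, 500)           # clearly non-Gaussian input
print(f"kernel estimate: {kernel_tail_probability(data, 3.0):.4f}")
print(f"exact exp(-3):   {np.exp(-3.0):.4f}")
```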

17.
We strengthen a previously known connection between the size complexity of two-way finite automata (2FAs) and the space complexity of Turing machines (TMs). Specifically, we prove that
  • every s-state 2NFA has a poly(s)-state 2DFA that agrees with it on all inputs of length ≤ s if and only if NL ⊆ L/poly, and
  • every s-state 2NFA has a poly(s)-state 2DFA that agrees with it on all inputs of length ≤ 2^s if and only if NLL ⊆ LL/polylog.
Here, 2DFAs and 2NFAs are the deterministic and nondeterministic 2FAs, NL and L/poly are the standard classes of languages recognizable in logarithmic space by nondeterministic TMs and by deterministic TMs with access to polynomially long advice, and NLL and LL/polylog are the corresponding complexity classes for space O(log log n) and advice length poly(log n). Our arguments strengthen and extend an old theorem by Berman and Lingas and can be used to obtain variants of the above statements for other modes of computation or other combinations of bounds for the input length, the space usage, and the length of advice.

18.
Reliability-based design optimization (RBDO) is a methodology for finding optimized designs characterized by a low probability of failure. Primarily, RBDO consists of optimizing a merit function while satisfying reliability constraints. The reliability constraints are constraints on the probability of failure corresponding to each of the failure modes of the system, or a single constraint on the system probability of failure. The probability of failure is usually estimated by performing a reliability analysis. During the last few years, a variety of different formulations have been developed for RBDO. Traditionally, these have been formulated as a double-loop (nested) optimization problem. The upper-level optimization loop generally involves optimizing a merit function subject to reliability constraints, and the lower-level optimization loop(s) compute(s) the probabilities of failure corresponding to the failure mode(s) that govern(s) the system failure. This formulation is, by nature, computationally intensive. Researchers have proposed sequential strategies to address this issue, where the deterministic optimization and reliability analysis are decoupled, and the process is performed iteratively until convergence is achieved. These methods, though attractive in terms of obtaining a workable reliable design at considerably reduced computational cost, often lead to premature convergence and therefore yield spurious optimal designs. In this paper, a novel unilevel formulation for RBDO is developed. In the proposed formulation, the lower-level optimization (the evaluation of reliability constraints in the double-loop formulation) is replaced by its corresponding first-order Karush–Kuhn–Tucker (KKT) necessary optimality conditions at the upper-level optimization. Such a replacement is computationally equivalent to solving the original nested optimization if the lower-level optimization problem is solved by numerically satisfying the KKT conditions (which is typically the case). It is shown through test problems that the proposed formulation is numerically robust (stable) and computationally efficient compared to existing approaches for RBDO.
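In performance-measure form, the replacement can be written compactly. The rendering below is one standard way to state the unilevel idea, with notation (design variables \(d\), target reliability index \(\beta_t\), limit states \(G_i\), standard normal variables \(u_i\)) chosen here for illustration; sign conventions and details may differ from the paper's exact formulation:

\[
\begin{aligned}
\min_{d,\,u_1,\ldots,u_m}\quad & f(d)\\
\text{s.t.}\quad & G_i(d,u_i) \ge 0,\\
& u_i = -\,\beta_t\,\frac{\nabla_u G_i(d,u_i)}{\lVert \nabla_u G_i(d,u_i)\rVert}, \qquad i=1,\ldots,m,
\end{aligned}
\]

where the second set of constraints is the first-order KKT (stationarity) condition of the inner MPP search \(\min_u\{G_i(d,u): \lVert u\rVert=\beta_t\}\), so the nested reliability loops disappear from the formulation.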

19.
Given a set of input points, the Steiner Tree Problem (STP) is to find a minimum-length tree that connects the input points, where it is possible to add new points to reduce the length of the tree. Solving the STP is of great importance, since it is one of the fundamental problems in network design, very-large-scale integration (VLSI) routing, multicast routing, wire length estimation, computational biology, and many other areas. However, the STP is NP-hard, which shatters any hope of finding a polynomial-time algorithm to solve the problem exactly. This is why the majority of research has looked at finding efficient heuristic algorithms. Additionally, many authors have focused their work on utilizing the ever-increasing computational power and have developed many parallel and distributed methods for solving the problem, obtaining better results in less time than ever before. Here, we present a survey of the parallel and distributed methods for solving the STP and discuss some of their applications.

20.
Golomb rulers are special rulers for which the distance between any two marks is unique. They find applications in radio frequency selection, radio astronomy, data encryption, communication networks, and bioinformatics. An important subproblem in constructing "compact" Golomb rulers is Golomb Subruler (GSR), which asks whether it is possible to make a given ruler Golomb by removing at most \(k\) marks. We initiate a study of GSR from a parameterized complexity perspective. In particular, we consider a natural hypergraph characterization of rulers and investigate the construction and structure of the corresponding hypergraphs. We exploit their properties to derive polynomial-time data reduction rules that reduce a given instance of GSR to an equivalent one with \(\mathrm{O}(k^3)\) marks. Finally, we complement a recent computational complexity study of GSR by providing a simplified reduction that shows NP-hardness even when all integers are bounded by a polynomial in the input length.
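For reference, checking whether a given ruler is Golomb is straightforward; the brute-force verifier below (not part of the paper's parameterized algorithm) makes the GSR decision problem concrete.

```python
from itertools import combinations

def is_golomb(marks):
    """Check that all pairwise distances between marks are distinct."""
    dists = [b - a for a, b in combinations(sorted(marks), 2)]
    return len(dists) == len(set(dists))

# GSR asks: can at most k marks be removed to make the ruler Golomb?
ruler = [0, 1, 2, 4]           # not Golomb: distances 1 and 2 each repeat
print(is_golomb(ruler))        # False
print(is_golomb([0, 1, 4, 9])) # True (a perfect Golomb ruler)
```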
