Similar Literature
20 similar documents found.
1.
Taboo search is a heuristic optimization technique that works with a neighbourhood of solutions to optimize a given objective function. It is generally applied to single-objective optimization problems, but it also has potential for multiple objective optimization (MOO): because it works with more than one solution at a time, it can evaluate several objective functions simultaneously. In this paper, a taboo-search-based algorithm is developed to find Pareto optimal solutions to MOO problems. The algorithm has been tested on a number of problems and compared with other techniques, and the results show that it can find Pareto optimal solutions in MOO effectively.
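As a concrete illustration of the idea (not the authors' algorithm), the following Python sketch maintains a Pareto archive while a tabu-style neighbourhood search explores a bi-objective test problem; the test functions, neighbourhood rule, and all parameter values are invented for the example.

```python
import random

def dominates(a, b):
    """True if objective vector a Pareto-dominates b (minimization)."""
    return all(p <= q for p, q in zip(a, b)) and any(p < q for p, q in zip(a, b))

def objectives(x):
    # Simple bi-objective test problem (Schaffer-like), for illustration only.
    return (x * x, (x - 2.0) ** 2)

def tabu_pareto(iters=300, tabu_len=10, step=0.1):
    x = random.uniform(-2.0, 4.0)
    tabu, archive = [], []            # recently visited points; non-dominated set
    for _ in range(iters):
        # Candidate neighbours of the current solution, excluding tabu points.
        moves = [round(x + step * d, 6) for d in (-3, -2, -1, 1, 2, 3)]
        moves = [m for m in moves if m not in tabu] or [x]
        x = random.choice(moves)                  # admissible neighbour
        tabu = (tabu + [x])[-tabu_len:]           # fixed-length tabu list
        f = objectives(x)
        # Archive update: keep x only if no archived point dominates it,
        # and drop archived points that x dominates.
        if not any(dominates(objectives(a), f) for a in archive):
            archive = [a for a in archive if not dominates(f, objectives(a))]
            archive.append(x)
    return sorted(archive)

front = tabu_pareto()
print(f"{len(front)} non-dominated solutions, e.g. {front[:5]}")
```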

2.
Numerical optimisation of superplastic deformation
Based on an approach due to Padmanabhan and Davies, a multi-dimensional regression analysis has been developed which predicts the superplastic deformation parameters m (the strain-rate sensitivity index) and K (the strength parameter) as functions of strain rate, grain size and temperature. Further analysis enables the optimisation of the operating conditions (for minimum power consumption) through a prediction of the external load and power consumption using the predicted values of m and K. The procedure has been validated by applying it to the experimental data for the tin-lead eutectic alloy. The technique could also be useful for problems (not necessarily in the area of superplasticity) where a particular parameter depends on a number of independent variables.
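A minimal sketch of this kind of multi-dimensional regression, assuming (purely for illustration) a log-linear dependence of m on strain rate, grain size, and temperature; the synthetic data and the model form are not from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic observations: strain rate (1/s), grain size (um), temperature (K).
n = 60
strain_rate = rng.uniform(1e-4, 1e-2, n)
grain_size = rng.uniform(2.0, 10.0, n)
temperature = rng.uniform(400.0, 500.0, n)

# Assumed log-linear model for the strain-rate sensitivity index m (illustrative).
m_obs = (0.55 - 0.05 * np.log10(strain_rate) - 0.01 * grain_size
         + 0.0005 * temperature + rng.normal(0.0, 0.01, n))

# Multi-dimensional least-squares regression of m on the independent variables.
X = np.column_stack([np.ones(n), np.log10(strain_rate), grain_size, temperature])
coef, *_ = np.linalg.lstsq(X, m_obs, rcond=None)

print("fitted coefficients:", np.round(coef, 4))
print("RMS residual:", round(float(np.sqrt(np.mean((X @ coef - m_obs) ** 2))), 4))
```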

3.
The non-Gaussian Karhunen–Loeve (K–L) expansion is very attractive because it extends readily to non-stationary and multi-dimensional fields in a unified way. For strongly non-Gaussian processes, however, the original procedure is unable to match the distribution tails well. This paper proposes an effective solution to this tail-mismatch problem using a modified orthogonalization technique that reduces the degree of shuffling within columns containing empirical realizations of the K–L random variables. Numerical examples demonstrate that the present algorithm matches highly non-Gaussian marginal distributions and stationary/non-stationary covariance functions simultaneously to a very accurate degree. Its ability to converge correctly to an abrupt lower bound in the target marginal distributions studied is noteworthy.
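For context, the sketch below shows the standard two steps the paper builds on: a K–L simulation of a Gaussian field from the eigen-decomposition of its covariance matrix, followed by a naive marginal translation to a non-Gaussian (here lognormal) target. It is exactly this simple translation whose tail mismatch the modified orthogonalization addresses; the covariance model, grid, and target are illustrative.

```python
import numpy as np
from scipy.stats import norm, lognorm

rng = np.random.default_rng(1)

# Discretize a stationary exponential covariance C(s, t) = exp(-|s - t| / c) on [0, 1].
n, c = 100, 0.3
t = np.linspace(0.0, 1.0, n)
C = np.exp(-np.abs(t[:, None] - t[None, :]) / c)

# K-L expansion: eigen-decomposition of the covariance matrix, largest modes first.
lam, phi = np.linalg.eigh(C)
lam, phi = np.maximum(lam[::-1], 0.0), phi[:, ::-1]
k = 20                                            # number of retained modes

# Gaussian realizations from the truncated expansion.
xi = rng.standard_normal((5000, k))
g = xi @ (phi[:, :k] * np.sqrt(lam[:k])).T        # shape (5000, n)

# Naive translation to a lognormal marginal through the Gaussian CDF; matching
# the tails of strongly non-Gaussian targets is where this step breaks down.
x = lognorm.ppf(norm.cdf(g), s=1.0)

print("target vs simulated 99th percentile:",
      round(float(lognorm.ppf(0.99, s=1.0)), 3), round(float(np.quantile(x, 0.99)), 3))
```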

4.
Importance sampling (IS) is a useful simulation technique for estimating a critical probability with better accuracy than Monte Carlo methods. It consists of generating random weighted samples from an auxiliary distribution rather than from the distribution of interest. The crucial part of the algorithm is the choice of an efficient auxiliary PDF, one able to generate the rare events of interest more often. In practice, optimising this auxiliary distribution is very difficult. In this article, we propose to approximate the optimal IS auxiliary density with non-parametric adaptive importance sampling (NAIS). We apply the technique to the probability estimation of the impact position of a space launcher, an increasingly important issue in the field of aeronautics.
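The sketch below illustrates the adaptive, non-parametric idea in one dimension, with a Gaussian kernel density estimate serving as the auxiliary PDF; the failure threshold, stage counts, and the re-centring heuristic are all invented, and this is a simplification of NAIS, not the authors' implementation.

```python
import numpy as np
from scipy.stats import norm, gaussian_kde

rng = np.random.default_rng(2)

s = 3.5                       # rare event {X > s} for X ~ N(0, 1)
exact = norm.sf(s)            # exact probability, for checking the estimate

def nais(n=2000, stages=4):
    x = rng.standard_normal(n)          # stage 0: sample the nominal density
    w = np.ones_like(x)                 # importance weights f/g
    for _ in range(stages):
        fail = x > s
        # Use failure samples if any; otherwise re-centre on the largest samples
        # (a crude bootstrap heuristic for the first stages).
        idx = np.flatnonzero(fail) if fail.any() else np.argsort(x)[-50:]
        kde = gaussian_kde(x[idx], weights=w[idx])   # auxiliary density g
        x = kde.resample(n, seed=rng).ravel()
        w = norm.pdf(x) / kde(x)                     # weights f(x) / g(x)
    return float(np.mean(w * (x > s)))

print(f"exact {exact:.3e}  NAIS estimate {nais():.3e}")
```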

5.
Abstract: A new maximum-entropy stochastic finite element method based on the decomposition method is proposed. Through univariate decomposition, the multi-dimensional random response function is expressed as a combination of one-dimensional random response functions, so that the multi-dimensional integrals for the statistical moments of the random structural response reduce to one-dimensional integrals, which are evaluated by Gauss–Hermite quadrature. Once the statistical moments of the structural response have been obtained, the maximum entropy principle is used to derive an analytical expression for the probability density function of the response. The method involves no derivative computations and is therefore well suited to nonlinear stochastic problems. Numerical examples show that the method achieves good accuracy and computational efficiency.
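A minimal sketch of the final step only: fitting a maximum-entropy density of the form exp(-Σ λ_k x^k) to a given set of statistical moments by minimizing the convex dual. The target moments here are those of a standard normal, purely so the answer is checkable; the quadrature stage of the paper is not reproduced.

```python
import numpy as np
from scipy.optimize import minimize

# Target: first four moments, here those of a standard normal so the answer is known.
mu = np.array([0.0, 1.0, 0.0, 3.0])               # E[X], E[X^2], E[X^3], E[X^4]
grid = np.linspace(-6.0, 6.0, 2001)
dx = grid[1] - grid[0]
powers = np.vstack([grid ** k for k in range(1, 5)])

def dual(lam):
    # Convex dual of the maximum-entropy problem: log Z(lam) + lam . mu.
    return np.log(np.sum(np.exp(-lam @ powers)) * dx) + lam @ mu

res = minimize(dual, x0=np.array([0.0, 0.5, 0.0, 0.01]), method="Nelder-Mead",
               options={"xatol": 1e-9, "fatol": 1e-12, "maxiter": 20000})
lam = res.x
pdf = np.exp(-lam @ powers)
pdf /= np.sum(pdf) * dx

# The recovered lambda should be close to (0, 0.5, 0, 0), i.e. the normal density.
moments = np.array([np.sum(grid ** k * pdf) * dx for k in range(1, 5)])
print("lambda:", np.round(lam, 3))
print("moments in:", mu, "-> moments out:", np.round(moments, 3))
```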

6.
Variable screening and ranking using sampling-based sensitivity measures
This paper presents a methodology for screening out insignificant random variables and ranking the significant ones using sensitivity measures, including two cumulative distribution function (CDF)-based and two mean-response-based measures. The methodology features (1) using random samples to compute sensitivities and (2) using acceptance limits, derived from hypothesis testing, to classify random variables as significant or insignificant. Because no approximation is needed in either the form of the performance functions or the type of continuous distribution functions representing input variables, the sampling-based approach can handle highly nonlinear functions with non-normal variables. The main characteristics and effectiveness of the sampling-based sensitivity measures are investigated using both simple and complex examples. Because the number of samples needed does not depend on the number of variables, the methodology is particularly suitable for large, complex models with many random variables but relatively few significant ones.
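A minimal sketch of the sampling-based screening idea (the measures here are generic, not the paper's four): compare each input's distribution conditional on "failure" with its unconditional one, and use a two-sample Kolmogorov–Smirnov test as the acceptance limit; the performance function is invented.

```python
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(4)

def g(x):
    # Invented performance function: x0 and x1 matter, x2 and x3 are dummies.
    return 2.0 * x[:, 0] + x[:, 1] ** 2

n = 5000
x = rng.standard_normal((n, 4))
fail = g(x) > np.quantile(g(x), 0.9)       # "failure" = top 10% of the response

# CDF-based screening: a significant variable shifts its conditional distribution.
for i in range(x.shape[1]):
    stat, p = ks_2samp(x[fail, i], x[~fail, i])
    verdict = "significant" if p < 0.05 else "insignificant"
    print(f"x{i}: KS statistic {stat:.3f}  p-value {p:.2e}  -> {verdict}")
```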

7.
This paper studies a two-unit system with multi-dimensional diagnostic parameters, in which one unit has two states (normal and failed) and the other unit has three states (normal, abnormal, and failed). While the system is operating, it is inspected at random intervals to determine whether the latter unit is normal or abnormal, until either the system fails or an inspection finds that unit abnormal. Using probability analysis, supplementary variables, Laplace transforms, and optimization methods, we derive the reliability indices of the system and, on that basis, the optimal critical values of the diagnostic parameters and the optimal inspection period.

8.
With most controlled-release oral drug dosage forms, dissolution is the rate-limiting step in drug release. While in vivo drug absorption and elimination involve a number of complex factors, characterization of the in vitro dissolution rate under controlled conditions (pH, solvent, speed, etc.) should provide valuable insights into in vivo drug bioavailability.

Frequently, the analysis of these factors becomes obscured when a variety of data are presented in conventional two-dimensional plots, and the decision to approve or disapprove a new drug product on the basis of such data becomes difficult. We have therefore examined the characteristics of drug product dissolution using a multi-dimensional technique available in SAS as a means of more effectively delineating the properties of the dissolution rate. The results of our studies show that more definitive information can be discerned in a multi-dimensional topographic image, which has been shown to be predictive of in vivo drug plasma concentrations.

9.
李可欣  郭健  王宇君  李宗明  缪坤  陈辉 《包装工程》2023,44(11):284-292
Objective: To effectively analyse and explore the spatio-temporal trajectory behaviour patterns of ocean-going ships, improve the efficiency and quality of ship-trajectory clustering, and better detect the anomalous behaviour of real ships. Methods: To address the insufficient use of multi-dimensional feature information, low detection efficiency, and poor detection accuracy in current research on ship trajectory data, a highly accurate ship anomalous-trajectory identification method that autonomously identifies and analyses multi-dimensional features is proposed. First, a random forest classifier is used to evaluate the importance of the multi-dimensional features and construct an optimal combination of trajectory features. A dimension-reduction density clustering method is then proposed that combines T-distributed stochastic neighbour embedding (T-SNE) with adaptive density-based clustering (DBSCAN): a feature-selection layer and an unsupervised clustering layer efficiently extract the nonlinear relations among data elements and intelligently select the clustering parameters. Finally, cluster feature vectors are constructed from the clustering results, and a distance threshold is computed to judge trajectory similarity, yielding the trajectory anomaly detection model. Results: On the UCI data sets, the dimension-reduction density clustering method achieves F1 scores of 0.9048, 0.9534, 0.8218, and 0.6627 on data sets with 4, 13, 30, and 64 features, respectively, and several clustering indices outperform those of common algorithms such as DBSCAN and K-Means. Conclusion: The results show that the dimension-reduction density clustering method effectively extracts the multi-dimensional feature structure of the data, adapts the clustering parameters automatically, overcomes the difficulty of parameter selection in density clustering, and effectively identifies various types of anomalous ship trajectories.
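A minimal scikit-learn sketch of the pipeline's three layers, run on a small UCI data set standing in for trajectory features; the parameter values are fixed by hand here, whereas the paper's point is precisely their adaptive selection.

```python
import numpy as np
from sklearn.datasets import load_wine
from sklearn.ensemble import RandomForestClassifier
from sklearn.preprocessing import StandardScaler
from sklearn.manifold import TSNE
from sklearn.cluster import DBSCAN

# A small labelled UCI data set stands in for multi-dimensional trajectory features.
X, y = load_wine(return_X_y=True)
X = StandardScaler().fit_transform(X)

# Feature-selection layer: rank feature importance with a random forest.
rf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)
top = np.argsort(rf.feature_importances_)[-6:]     # keep the six best features

# Dimension-reduction and clustering layers: T-SNE embedding, then DBSCAN.
emb = TSNE(n_components=2, perplexity=30, random_state=0).fit_transform(X[:, top])
labels = DBSCAN(eps=4.0, min_samples=5).fit_predict(emb)

print("clusters:", len(set(labels) - {-1}), " noise points:", int((labels == -1).sum()))
```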

10.
A new class of computational methods, referred to as decomposition methods, has been developed for predicting the failure probability of structural and mechanical systems subject to random loads, material properties, and geometry. The methods involve a novel function decomposition that facilitates univariate and bivariate approximations of a general multivariate function, response-surface generation of the univariate and bivariate functions, and Monte Carlo simulation. Because only a small number of original function evaluations are required, the proposed methods are very effective, particularly when a response evaluation entails a costly finite-element, mesh-free, or other numerical analysis. Seven numerical examples involving elementary mathematical functions and solid-mechanics problems illustrate the methods. The results indicate that the proposed methods provide accurate and computationally efficient estimates of the probability of failure.
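A minimal sketch of the univariate variant, under the assumption of independent standard normal inputs: one-dimensional response surfaces are fitted along each axis through the mean point, recombined additively, and Monte Carlo is then run on the inexpensive surrogate; the limit state and all settings are invented.

```python
import numpy as np

rng = np.random.default_rng(5)

def perf(x):
    # Invented limit state of three independent standard normal inputs;
    # "failure" is perf(x) < 0.
    return 5.0 - x[0] ** 2 - 0.5 * x[1] - np.exp(0.4 * x[2])

N = 3
y0 = perf(np.zeros(N))
grid = np.linspace(-4.0, 4.0, 9)          # sample points along each axis

# One-dimensional response surfaces: cubic fits along each coordinate axis.
polys = []
for i in range(N):
    pts = np.zeros((grid.size, N))
    pts[:, i] = grid
    vals = np.array([perf(p) for p in pts])
    polys.append(np.polynomial.Polynomial.fit(grid, vals, deg=3))

def surrogate(x):
    # Additive univariate recombination: sum_i y_i(x_i) - (N - 1) * y(mean).
    return sum(p(x[:, i]) for i, p in enumerate(polys)) - (N - 1) * y0

# Monte Carlo on the surrogate needs no further perf() evaluations.
xs = rng.standard_normal((200_000, N))
pf_surr = float(np.mean(surrogate(xs) < 0.0))
pf_mc = float(np.mean([perf(x) < 0.0 for x in xs[:20_000]]))
print(f"surrogate Pf {pf_surr:.4f}  direct MC Pf (20k runs) {pf_mc:.4f}")
```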

11.
In this paper, a linearly conforming radial point interpolation method (LC-RPIM) is presented for the linear analysis of shells. The first-order shear deformation shell theory is adopted, and radial and polynomial basis functions are employed to construct the shape functions. A strain-smoothing stabilization technique for nodal integration is used to restore conformability and to improve accuracy. Convergence studies are performed in terms of the number of nodes and the nodal distribution patterns, including regular and irregular distributions. Comparisons with existing results in the literature show good agreement. The numerical examples demonstrate that the present approach provides very stable and accurate results and effectively eliminates membrane locking and shear locking in shell problems.

12.
Abstract

With most controlled-release oral drug dosage forms, dissolution is the rate-limiting step in drug release. While in vivo drug absorption and elimination involve a number of complex factors, characterization of the in vitro dissolution rate under controlled conditions (pH, solvent, speed, etc.) should provide valuable insights into in vivo drug bioavailability.

Frequently, the analysis of these factors becomes obscured when a variety of data are presented in conventional two-dimensional plots, and the decision to approve or disapprove a new drug product on the basis of such data becomes difficult. We have therefore examined the characteristics of drug product dissolution using a multi-dimensional technique available in SAS as a means of more effectively delineating the properties of the dissolution rate. The results of our studies show that more definitive information can be discerned in a multi-dimensional topographic image, which has been shown to be predictive of in vivo drug plasma concentrations.

13.
Geotechnical parameters in practical engineering are often available only as small samples. When classical probability distributions (normal, lognormal, beta, Weibull, etc.) are used to infer their optimal probability type, three basic problems remain unresolved: the mismatch between the sample's distribution interval and the assumed support, the choice of a bounded range, and the inability of single-peaked probability density functions to reflect the random fluctuation of the parameters. To address these issues, five different value intervals were selected, and the probability distribution functions of geotechnical parameters were inferred with Legendre polynomials and Chebyshev polynomials of the second kind, using ten groups of measured small-sample geotechnical data; the resulting distribution functions were checked with the Kolmogorov–Smirnov (K-S) test. By comparing interval matching, goodness-of-fit statistics, and cumulative probability values, a criterion for determining the integration interval is proposed that is based on the 3σ principle and adjusted for skewness. The results show that when the order of the optimal probability distribution is determined by the finite comparison method, the resulting distribution reflects the random fluctuation of the small-sample data; moreover, the test statistics of the distribution functions inferred by orthogonal polynomials are smaller than those of the traditional distributions, indicating that the inferred distributions better match the actual behaviour of geotechnical parameters.
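A minimal sketch of the orthogonal-polynomial inference for a small sample, using a Legendre expansion on an interval fixed by the 3σ rule (the baseline the paper adjusts for skewness) and a K-S check of the fitted distribution; the sample is synthetic.

```python
import numpy as np
from numpy.polynomial import legendre
from scipy.stats import kstest

rng = np.random.default_rng(6)

# A small synthetic "measured" sample standing in for a geotechnical parameter.
sample = rng.lognormal(mean=1.0, sigma=0.2, size=30)

# Integration interval from the 3-sigma rule (the baseline the paper adjusts),
# widened if necessary so that it covers the observed sample.
a = min(sample.mean() - 3 * sample.std(), sample.min() - 1e-9)
b = max(sample.mean() + 3 * sample.std(), sample.max() + 1e-9)
u = 2.0 * (sample - a) / (b - a) - 1.0          # map the sample onto [-1, 1]

# Legendre-series density: f(u) = sum_k (2k + 1)/2 * E[P_k(U)] * P_k(u),
# with E[P_k(U)] estimated by the sample mean.
order = 4
coef = np.array([(2 * k + 1) / 2.0 * legendre.Legendre.basis(k)(u).mean()
                 for k in range(order + 1)])
dens = legendre.Legendre(coef)

# Build the CDF numerically and check the fit with a K-S test.
grid = np.linspace(-1.0, 1.0, 2001)
pdf = np.clip(dens(grid), 0.0, None)
cdf = np.concatenate([[0.0], np.cumsum(0.5 * (pdf[1:] + pdf[:-1]) * np.diff(grid))])
cdf /= cdf[-1]
stat, p = kstest(u, lambda q: np.interp(q, grid, cdf))
print(f"K-S statistic {stat:.3f}  p-value {p:.3f}")
```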

14.
The central theme of this paper is multiplicative polynomial dimensional decomposition (PDD) methods for solving high-dimensional stochastic problems. When a stochastic response is dominantly multiplicative in nature, the standard PDD approximation, predicated on additive function decomposition, may not provide sufficiently accurate probabilistic solutions of a complex system. To circumvent this problem, two multiplicative versions of PDD, referred to as factorized PDD and logarithmic PDD, were developed. Both versions involve a hierarchical, multiplicative decomposition of a multivariate function, a broad range of orthonormal polynomial bases for Fourier-polynomial expansions of the component functions, and a dimension-reduction or sampling technique for estimating the expansion coefficients. Three numerical problems involving mathematical functions or uncertain dynamic systems were solved to establish how and when a multiplicative PDD is more efficient or accurate than the additive PDD. The results show that both the factorized and logarithmic PDD approximations can effectively exploit the hidden multiplicative structure of a stochastic response when it exists; since a multiplicative PDD recycles the component functions of the additive PDD, no additional cost is incurred. Finally, the random eigensolutions of a sport utility vehicle comprising 40 random variables were evaluated, demonstrating the ability of the new methods to solve industrial-scale problems.
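To see why a multiplicative variant can help, here is a small sketch (not the paper's PDD machinery): a dominantly multiplicative response is fitted once additively and once through its logarithm, using the same univariate polynomial basis; the response and basis are invented.

```python
import numpy as np

rng = np.random.default_rng(9)

# A dominantly multiplicative response of three independent uniform inputs (invented).
def y(x):
    return (1.0 + 0.9 * x[:, 0]) * (1.0 + 0.8 * x[:, 1]) * (1.0 + 0.7 * x[:, 2])

n = 4000
x = rng.uniform(-0.5, 0.5, size=(n, 3))
resp = y(x)

# Univariate (additive-PDD-like) polynomial basis: constant + x_i + x_i^2 terms.
X = np.column_stack([np.ones(n), x, x ** 2])

coef_add, *_ = np.linalg.lstsq(X, resp, rcond=None)          # additive fit of y
coef_log, *_ = np.linalg.lstsq(X, np.log(resp), rcond=None)  # fit of ln y, exponentiated

err_add = np.sqrt(np.mean((X @ coef_add - resp) ** 2))
err_log = np.sqrt(np.mean((np.exp(X @ coef_log) - resp) ** 2))
print(f"RMS error: additive fit {err_add:.4f}, logarithmic fit {err_log:.4f}")
```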

15.
The reasonable modeling of a nonstationary stochastic turbulent wind field is an important basis and premise for analysing the wind-induced response and reliability of engineering structures. In the present study, two dimension-reduction probabilistic models are established for simulating multi-dimensional, multi-variable nonstationary turbulent wind fields, based on the double proper orthogonal decomposition (DPOD) and the double spectral representation method (DSRM). The DPOD, originally used to simulate stationary turbulent wind fields, is extended here to nonstationary ones, while the DSRM is a newly proposed method for nonstationary turbulent wind fields with a large number of simulation points. In essence, the DPOD is a discrete method with explicit physical significance and flexible spatial location of simulation points, whereas the DSRM is a continuous method whose simulation efficiency is theoretically independent of the number of simulation points. Furthermore, by introducing dimension-reduction methods based on random functions and the POD-FFT (fast Fourier transform) technique into the DPOD and the DSRM, the nonstationary stochastic turbulent wind field can be described with merely three elementary random variables. Numerical examples of nonstationary stochastic turbulent wind fields acting on a long-span bridge and a communication tower verify the effectiveness and superiority of the proposed methods.
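A minimal single-point sketch of the spectral-representation idea for a nonstationary process, using an evolutionary PSD S(w, t) = |A(t)|^2 S0(w); the spectrum, modulation, and discretization are illustrative, and none of the DPOD/DSRM dimension-reduction machinery is reproduced.

```python
import numpy as np

rng = np.random.default_rng(10)

# Time and frequency discretization for a single simulation point.
T, dt = 60.0, 0.05
t = np.arange(0.0, T, dt)
w = np.linspace(0.05, 4.0, 200)                  # frequencies (rad/s)
dw = w[1] - w[0]

S0 = 1.0 / (1.0 + w ** 4)                        # stationary "kernel" spectrum
A = np.exp(-((t - 20.0) ** 2) / 200.0)           # slowly varying modulation

# Spectral representation: X(t) = sum_k sqrt(2 S(w_k, t) dw) cos(w_k t + phi_k).
phi = rng.uniform(0.0, 2.0 * np.pi, size=w.size)
X = np.sqrt(2.0 * dw) * (
    (A[:, None] * np.sqrt(S0)[None, :]) * np.cos(t[:, None] * w[None, :] + phi)
).sum(axis=1)

# The sample variance should follow the modulation: large near t = 20 s, tiny later.
print("std near t = 20 s:", round(float(X[(t > 18) & (t < 22)].std()), 3))
print("std near t = 55 s:", round(float(X[t > 50].std()), 3))
```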

16.
This paper presents a new univariate dimension-reduction method for calculating the statistical moments of the response of mechanical systems subject to uncertainties in loads, material properties, and geometry. The method involves an additive decomposition of a multi-dimensional response function into multiple one-dimensional functions, an approximation of the response moments by the moments of single random variables, and a moment-based quadrature rule for numerical integration. The resulting moment equations entail evaluating N one-dimensional integrals, which is substantially simpler and more efficient than performing one N-dimensional integration. The proposed method requires neither the calculation of partial derivatives of the response, as in commonly used Taylor expansion/perturbation methods, nor the inversion of random matrices, as in Neumann expansion methods. Nine numerical examples involving elementary mathematical functions and solid-mechanics problems illustrate the proposed method. The results indicate that the univariate dimension-reduction method provides more accurate estimates of statistical moments or multidimensional integrals than first- and second-order Taylor expansion methods, the second-order polynomial chaos expansion method, the second-order Neumann expansion method, statistically equivalent solutions, quasi-Monte Carlo simulation, and the point estimate method. While its accuracy is comparable to that of the fourth-order Neumann expansion, a comparison of CPU times suggests that the univariate dimension-reduction method is computationally far more efficient.
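A minimal sketch of the moment computation for independent standard normal inputs: each one-dimensional expectation is evaluated with a Gauss–Hermite rule and the univariate contributions are recombined. The response function is invented, and its additive structure makes the decomposition exact here, so the Monte Carlo check is easy to read.

```python
import numpy as np
from numpy.polynomial.hermite import hermgauss

def response(x):
    # Invented nonlinear response of N independent standard normal inputs.
    return np.exp(0.3 * x[0]) + x[1] ** 2 + np.sin(x[2])

N = 3
mu = np.zeros(N)                          # reference (mean) point
t, w = hermgauss(10)                      # 10-point Gauss-Hermite rule
nodes, weights = np.sqrt(2.0) * t, w / np.sqrt(np.pi)   # rescaled for N(0, 1)

# Univariate decomposition: y ~ sum_i y_i(X_i) - (N - 1) * y(mu).
y0 = response(mu)
mean, var = (1 - N) * y0, 0.0
for i in range(N):
    vals = []
    for node in nodes:                    # 1-D expectations of y(mu_1,...,X_i,...,mu_N)
        x = mu.copy()
        x[i] = node
        vals.append(response(x))
    vals = np.array(vals)
    m1, m2 = weights @ vals, weights @ vals ** 2
    mean += m1
    var += m2 - m1 ** 2                   # univariate terms are independent

# Monte Carlo check of the dimension-reduction moments.
rng = np.random.default_rng(3)
ys = np.array([response(x) for x in rng.standard_normal((20000, N))])
print(f"DR mean {mean:.4f}  MC mean {ys.mean():.4f}")
print(f"DR var  {var:.4f}  MC var  {ys.var():.4f}")
```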

17.
Objective: To address the complex mapping between the multi-dimensional design elements of rehabilitation-care products and user emotional satisfaction, a product emotional-satisfaction prediction model based on review data is proposed. Methods: First, sample review data are crawled, an LDA model is used to construct the multi-dimensional design-element space of the product, and the samples are element-coded. Next, sentiment analysis is performed on the reviews of each sample to obtain emotional-satisfaction scores. Finally, a BP neural network is used to build the emotional-satisfaction prediction model, and K-fold cross-validation is used to test its reliability. Results: Taking an electric wheelchair (a rehabilitation-care aid) as an example, cross-validation on a data set of 30 samples gives a mean squared error of 0.0441 between predicted and expected values, indicating that the model is accurate and highly reliable. The model is then used to compute the emotional satisfaction of 104,976 random combination schemes, from which the best combination scheme for the electric wheelchair is obtained. Conclusion: The prediction model built from user review data effectively establishes the mapping between multi-dimensional design elements and emotional satisfaction, helps designers quickly identify design-element combinations with high user emotional satisfaction, and makes design decision-making more scientific.
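A minimal sketch of the prediction stage, with scikit-learn's MLPRegressor standing in for the BP network; the element-coded samples and satisfaction scores are synthetic, and K-fold cross-validation plus exhaustive scoring of candidate combinations mirror the described workflow.

```python
import numpy as np
from itertools import product
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import KFold

rng = np.random.default_rng(7)

# Synthetic stand-in data: 30 element-coded design samples (4 design elements,
# 3 levels each) with sentiment-derived satisfaction scores around [0, 1].
X = rng.integers(0, 3, size=(30, 4)).astype(float)
y = 0.5 + X @ np.array([0.3, -0.2, 0.25, 0.1]) / 3.0 + rng.normal(0.0, 0.05, 30)

# BP-style network (one hidden layer) with K-fold cross-validation of the MSE.
mses = []
for train, test in KFold(n_splits=5, shuffle=True, random_state=0).split(X):
    net = MLPRegressor(hidden_layer_sizes=(8,), max_iter=5000, random_state=0)
    net.fit(X[train], y[train])
    mses.append(np.mean((net.predict(X[test]) - y[test]) ** 2))
print(f"cross-validated MSE: {np.mean(mses):.4f}")

# Score every candidate element combination (3^4 = 81 here) and pick the best.
cands = np.array(list(product(range(3), repeat=4)), dtype=float)
net = MLPRegressor(hidden_layer_sizes=(8,), max_iter=5000, random_state=0).fit(X, y)
print("best element combination:", cands[np.argmax(net.predict(cands))])
```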

18.
This study investigates the predictability of the form and parameters of the distribution of materials-handling volume-distance values associated with layout problems characterized by static distance functions. Previous research suggests that volume-distance distributions associated with line layout problems tend toward normality. This study extends these results to more general layout problems in which the distances between workcentre locations do not vary with alternative assignments of workcentres to locations. The feasibility of estimating volume-distance distribution parameters from random samples of layout alternatives is also investigated. The volume-distance parameter estimates are used to define satisficing criteria for heuristic layout algorithms, which are illustrated through a series of quadratic assignment problems with known optimal solutions.
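A minimal sketch of the sampling idea for a QAP-style layout problem with static distances: volume-distance values of random layout alternatives are used to estimate the distribution parameters that define a threshold for heuristics; the flow volumes and locations are randomly generated.

```python
import numpy as np

rng = np.random.default_rng(8)

n = 8                                            # workcentres / locations
flow = rng.integers(0, 20, size=(n, n)).astype(float)   # material-handling volumes
np.fill_diagonal(flow, 0.0)
xy = rng.uniform(0.0, 10.0, size=(n, 2))         # fixed location coordinates
dist = np.linalg.norm(xy[:, None] - xy[None, :], axis=2)  # static distances

def volume_distance(perm):
    # Total volume-distance when workcentre i is assigned to location perm[i].
    return float(np.sum(flow * dist[np.ix_(perm, perm)]))

# Random sample of layout alternatives -> empirical volume-distance distribution.
vd = np.array([volume_distance(rng.permutation(n)) for _ in range(5000)])
mean, sd = vd.mean(), vd.std()
skew = float(((vd - mean) ** 3).mean() / sd ** 3)
print(f"mean {mean:.1f}  sd {sd:.1f}  skewness {skew:.3f}")

# A satisficing threshold for a heuristic: accept layouts below, say, mean - 2 sd.
print("satisficing threshold (mean - 2 sd):", round(mean - 2 * sd, 1))
```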

19.
This paper presents a procedure for obtaining compromise designs of structural systems under stochastic excitation; in particular, an effective strategy for determining specific Pareto optimal solutions is implemented. The design goals are defined in terms of deterministic performance functions and/or performance functions involving reliability measures, and the associated reliability problems are characterized by a large number of uncertain parameters (hundreds or thousands). The designs are obtained by formulating a compromise programming problem, which is solved by a first-order interior point algorithm. The sensitivity information required by the proposed strategy is estimated by an approach that combines an advanced simulation technique with local approximations of some of the quantities associated with structural performance. The proposed formulation makes an efficient Pareto sensitivity analysis with respect to the design variables possible, and this information is used for decision making and tradeoff analysis. Numerical validations show that only a moderate number of stochastic analyses (reliability estimations) has to be performed in order to find compromise designs. Two example problems illustrate the effectiveness of the proposed approach.
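A minimal sketch of compromise programming with two invented objective functions of one design variable: each weight vector defines a weighted distance to the ideal (utopia) point, whose minimizer is one Pareto-optimal compromise design; none of the reliability or sensitivity machinery is reproduced.

```python
import numpy as np
from scipy.optimize import minimize

# Two competing design objectives (invented): cost f1 and a failure-probability
# surrogate f2 over one design variable d in [0.1, 2.0].
f1 = lambda d: d ** 2                     # structural cost grows with size
f2 = lambda d: np.exp(-2.0 * d)           # "failure probability" shrinks with size

# Ideal (utopia) values from the individual minima on the design interval.
f1_star, f2_star = f1(0.1), f2(2.0)

def compromise(d, w=(0.5, 0.5), p=2):
    # Weighted L_p distance to the ideal point; its minimizer is one Pareto point.
    return (w[0] * abs(f1(d[0]) - f1_star) ** p
            + w[1] * abs(f2(d[0]) - f2_star) ** p) ** (1.0 / p)

for w in [(0.8, 0.2), (0.5, 0.5), (0.2, 0.8)]:
    res = minimize(compromise, x0=[1.0], args=(w,), bounds=[(0.1, 2.0)])
    d = res.x[0]
    print(f"weights {w}: d = {d:.3f}  f1 = {f1(d):.3f}  f2 = {f2(d):.4f}")
```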

20.
The increasing interest of the research community in the probabilistic analysis of civil structures with space-variant properties highlights the problem of achieving a reliable discretization of random processes (or random fields in a multi-dimensional domain). Given a discretization method, a continuous random process is approximated by a finite set of random variables, whose dimension significantly affects the accuracy of the approximation in terms of the relevant properties of the continuous random process under investigation. The paper presents a discretization procedure based on the truncated Karhunen–Loève series expansion and the finite element method. The objective is to link, in a rational way, the number of random variables involved in the approximation to a quantitative measure of the discretization accuracy. The finite element method is applied to evaluate the terms of the series expansion when a closed-form expression is not available, and an iterative refinement of the finite element mesh is proposed, leading to an accurate random-process discretization. The technique is first tested with the exponential covariance function, which permits a comparison with analytical expressions of the approximated properties of the random process. The procedure is then applied to the squared exponential covariance function, one of the most widely used covariance models in structural engineering. A comparison of the adaptive refinement with a non-adaptive procedure and with the wavelet-Galerkin approach demonstrates the computational efficiency of the proposal within the framework of the Karhunen–Loève series expansion, and a comparison with the Expansion Optimal Linear Estimation (EOLE) method is performed in terms of the efficiency of the discretization strategy.
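A minimal sketch linking the number of K–L random variables to a variance-based accuracy measure for the exponential covariance, using plain collocation on a fine grid as a stand-in for the paper's finite element evaluation; the grid and correlation length are illustrative.

```python
import numpy as np

# Exponential covariance on [0, L]; discretize and eigen-decompose (a simple
# collocation stand-in for the paper's finite element evaluation of K-L terms).
L, corr_len, n = 1.0, 0.2, 400
t = np.linspace(0.0, L, n)
h = t[1] - t[0]
C = np.exp(-np.abs(t[:, None] - t[None, :]) / corr_len)

# Scaling by the grid weight so the eigenvalues approximate those of the
# continuous integral operator; eigenfunctions renormalized accordingly.
lam, phi = np.linalg.eigh(C * h)
lam, phi = lam[::-1], phi[:, ::-1] / np.sqrt(h)

# Discretization accuracy: pointwise variance captured by the first M variables.
for M in (2, 5, 10, 20):
    var_M = (lam[:M] * phi[:, :M] ** 2).sum(axis=1)
    err = 1.0 - var_M.mean()              # target variance is 1 everywhere
    print(f"M = {M:2d}  mean variance error = {err:.3f}")
```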
