Similar Documents
 20 similar documents found (search time: 31 ms)
1.
Underwater contact explosions against ships involve the coupling of multiple phases of matter; extremely non-uniform density distributions, severe impedance mismatch, large deformations and strong shocks all make the problem very difficult for traditional numerical algorithms to simulate. This paper improves the traditional SPH algorithm and applies it to the underwater contact explosion problem: a linked-list search algorithm with variable smoothing length is proposed, which improves computational efficiency while preserving accuracy; strategies for handling extremely non-uniform densities in SPH are summarised; and pre- and post-processing techniques for SPH are studied, improving the method's problem-handling capability and the visualisation of results. Finally, the underwater contact explosion problem is successfully simulated; the results agree with theory, verifying the effectiveness and feasibility of the proposed methods. In addition, the influence of the particle spacing ratio on the simulation is analysed: the numerical results are relatively stable when the spacing ratio is below 2:1, whereas the computation breaks down at 5:1.
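The variable-smoothing-length linked-list (cell-list) neighbour search mentioned in the abstract can be illustrated with a minimal 2-D sketch. This is a generic cell-list search, not the paper's implementation; the function name, data layout and the choice of binning at the largest smoothing length are illustrative assumptions:

```python
from collections import defaultdict

def cell_list_neighbors(positions, h):
    """Find neighbour pairs within per-particle smoothing radii using a
    uniform cell list; cells are sized by the largest smoothing length."""
    cell = max(h)  # variable smoothing lengths: bin at the coarsest scale
    grid = defaultdict(list)
    for i, (x, y) in enumerate(positions):
        grid[(int(x // cell), int(y // cell))].append(i)
    neighbors = {i: [] for i in range(len(positions))}
    for (cx, cy), members in grid.items():
        for dx in (-1, 0, 1):
            for dy in (-1, 0, 1):
                for j in grid.get((cx + dx, cy + dy), []):
                    for i in members:
                        if i == j:
                            continue
                        (xi, yi), (xj, yj) = positions[i], positions[j]
                        r2 = (xi - xj) ** 2 + (yi - yj) ** 2
                        # symmetric criterion: interact within either radius
                        if r2 <= max(h[i], h[j]) ** 2:
                            neighbors[i].append(j)
    return neighbors
```

Binning at the coarsest smoothing length keeps the search correct for unequal radii at the cost of some extra distance tests; hierarchical cell structures are a common refinement when smoothing lengths vary over orders of magnitude.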

2.
A proof of concept for a model-less target detection and classification system for side-scan imagery is presented. The system is based on a supervised approach that uses augmented reality (AR) images for training computer aided detection and classification (CAD/CAC) algorithms, which are then deployed on real data. The algorithms are able to generalise and detect real targets when trained on AR ones, with performance comparable to the state of the art in CAD/CAC. To illustrate the approach, the focus is on one specific algorithm, which uses Bayesian decision theory and novel, purpose-designed central filter feature extractors. Depending on how the training database is partitioned, the algorithm can be used either for detection or for classification. Performance figures for these two modes of operation are presented for both synthetic and real targets. Typical results show a detection rate of more than 95% and a false alarm rate of less than 5%. The proposed supervised approach can be directly applied to train and evaluate other learning algorithms and data representations. Most importantly, it enables the use of a wealth of legacy pattern recognition algorithms for the sonar CAD/CAC applications of target detection and target classification.

3.
Local modelling with radial basis function networks
Different types of radial basis function network (RBFN) training algorithms are described and compared. Advantages and drawbacks of some of these algorithms are demonstrated on simulated and real data. Interpretability of the final models is emphasized.

4.
This article uses a hybrid optimization approach to solve the discrete facility layout problem (FLP), modelled as a quadratic assignment problem (QAP). The design of this approach is inspired by the ant colony meta-heuristic optimization method, combined with the extended great deluge (EGD) local search technique. Comparative computational experiments are carried out on benchmarks taken from the QAP library and from real-life problems. The performance of the proposed algorithm is compared to construction and improvement heuristics such as H63, HC63-66, CRAFT and Bubble Search, as well as to other meta-heuristics in the literature based on simulated annealing (SA), tabu search and genetic algorithms (GAs), and to other ant colony implementations for the QAP. The experimental results show that the proposed ant colony optimization/extended great deluge (ACO/EGD) algorithm performs significantly better than the existing construction and improvement algorithms, and indicate that it also offers advantages over the other meta-heuristic-based algorithms in terms of solution quality.
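The great deluge acceptance rule at the heart of the EGD component can be sketched in isolation. The following is a minimal swap-based great deluge local search for a QAP instance (the basic scheme with a geometrically falling water level, not the extended variant or the paper's ACO hybrid; parameter values and function names are illustrative assumptions):

```python
import random

def qap_cost(perm, flow, dist):
    """Quadratic assignment cost: sum of flow[i][j] * dist[perm[i]][perm[j]]."""
    n = len(perm)
    return sum(flow[i][j] * dist[perm[i]][perm[j]]
               for i in range(n) for j in range(n))

def great_deluge(flow, dist, iters=2000, decay=0.999, seed=0):
    """Great deluge local search on facility-to-location swaps: a move is
    accepted if its cost does not exceed the slowly falling 'water level'."""
    rng = random.Random(seed)
    n = len(flow)
    perm = list(range(n))
    rng.shuffle(perm)
    cost = qap_cost(perm, flow, dist)
    level = cost  # initial water level at the starting cost
    best_perm, best_cost = perm[:], cost
    for _ in range(iters):
        i, j = rng.sample(range(n), 2)
        perm[i], perm[j] = perm[j], perm[i]
        new_cost = qap_cost(perm, flow, dist)
        if new_cost <= level:
            cost = new_cost
            if cost < best_cost:
                best_perm, best_cost = perm[:], cost
        else:
            perm[i], perm[j] = perm[j], perm[i]  # reject: undo the swap
        level *= decay  # the deluge: lower the acceptance threshold
    return best_perm, best_cost
```

Unlike simulated annealing, acceptance here is deterministic: any move under the current water level passes, and the level alone controls how the search tightens over time.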

5.
A production scheduling problem originating from a real rotor workshop is addressed in the paper. Given its specific characteristics, the problem is formulated as a re-entrant hybrid flow shop scheduling problem with machine eligibility constraints. A mixed integer linear programming model of the problem is provided and solved with the Cplex solver. To solve larger instances, a discrete differential evolution (DDE) algorithm with a modified crossover operator is proposed. More importantly, a new decoder addressing the machine eligibility constraints is developed and embedded in the algorithm. To validate the performance of the proposed DDE algorithm, various test problems are examined, and its efficiency is compared with two other algorithms adapted from existing ones in the literature. A one-way ANOVA and a sensitivity analysis are applied to confirm the superiority of the new decoder. Tightness of due dates and different levels of scarcity of machines subject to machine eligibility restrictions are discussed in the sensitivity analysis. The results indicate the pre-eminence of the new decoder and the proposed DDE algorithm.

6.
卫田, 范文慧, 高丽, 王威. 《高技术通讯》2006, 16(12): 1259-1264
To address shortcomings in comparative studies of numerical algorithms for multidisciplinary design optimisation, a three-dimensional algorithm-comparison method is proposed that combines three evaluation criteria: the time required for optimisation, the number of problems solved, and the relative accuracy of the chosen objective function. Accuracy is introduced as an important comparison index for the first time, and the resulting three-dimensional comparison model provides a more reasonable theoretical basis for algorithm selection. On this basis, combined algorithms are compared with purely numerical ones, overcoming the traditional restriction of such comparisons to numerical algorithms. The results show that, with little change in computation time, the combined algorithms achieve substantially higher accuracy than purely numerical algorithms, offering more comprehensive support for engineering applications. A selection procedure for numerical algorithms and their combinations is then given. Finally, a multidisciplinary design optimisation example of a mobile phone verifies the rationality and feasibility of the proposed selection procedure.

7.
In most large field of view (FOV) observations, the distortion problem is inevitably and significantly more serious than in small-FOV ones, and many traditional star identification approaches can no longer identify stars efficiently. To deal with this problem, we put forward a star identification method that is less sensitive to distortion. The method first processes stars in the central area of the image using traditional identification logic, and then applies a region-growing strategy to enlarge the identified regions iteratively until the entire image is covered. The performance of the new scheme is analysed on both simulated and real data. The results show that the proposed algorithm has a speed advantage, and that the region-growing strategy can efficiently identify stars in large-FOV images compared with other existing algorithms.

8.
The operation-sequencing problem in process planning is considered, with the objective of minimizing the sum of machine, setup and tool change costs for producing a part. In general, the problem has combinatorial characteristics and complex precedence relations, which make it difficult to solve. Six local search heuristics based on simulated annealing and tabu search have been developed to obtain good solutions for practical-sized problems within a reasonable amount of computation time. Application of the algorithms is illustrated with an example part. Computational experiments on randomly generated problems show that the tabu search-based algorithms are better than the simulated annealing-based algorithms on overall average. In particular, one of the tabu search algorithms suggested here gave optimal solutions for most small-sized test problems within very short computation times.
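A simulated-annealing sequencer of the general kind the abstract describes might look as follows. This is an illustrative sketch, not one of the paper's six heuristics; the single transition-cost matrix stands in for the machine/setup/tool-change costs, precedence is given as (a, b) pairs meaning a must precede b, and all parameter values are assumptions:

```python
import math
import random

def sa_sequence(ops, trans_cost, precedence, iters=3000, t0=10.0, alpha=0.999, seed=1):
    """Simulated annealing for operation sequencing: minimise the summed
    transition cost of the sequence subject to precedence pairs."""
    rng = random.Random(seed)

    def feasible(seq):
        pos = {op: k for k, op in enumerate(seq)}
        return all(pos[a] < pos[b] for a, b in precedence)

    def cost(seq):
        return sum(trans_cost[seq[k]][seq[k + 1]] for k in range(len(seq) - 1))

    seq = list(ops)  # assume the given order is precedence-feasible
    cur = cost(seq)
    best, best_cost = seq[:], cur
    t = t0
    for _ in range(iters):
        i, j = rng.sample(range(len(seq)), 2)
        seq[i], seq[j] = seq[j], seq[i]
        if feasible(seq):
            new = cost(seq)
            # accept improvements always, worse moves with Boltzmann probability
            if new <= cur or rng.random() < math.exp((cur - new) / t):
                cur = new
                if cur < best_cost:
                    best, best_cost = seq[:], cur
            else:
                seq[i], seq[j] = seq[j], seq[i]
        else:
            seq[i], seq[j] = seq[j], seq[i]  # precedence violated: undo
        t *= alpha  # geometric cooling
    return best, best_cost
```

A tabu search variant would replace the probabilistic acceptance with a short-term memory of recently swapped pairs; the move and cost evaluation machinery stays the same.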

9.
The performance of Radio-Isotope IDentification (RIID) algorithms using gamma spectroscopy is becoming increasingly important. For example, sensors at locations that screen for illicit nuclear material rely on isotope identification to resolve innocent nuisance alarms arising from naturally occurring radioactive material. Recent data collections for RIID testing consist of repeat measurements for each of several scenarios. Efficient allocation of measurement resources requires an appropriate number of repeats per scenario. To help allocate resources in such data collections, we consider using only a few real repeats per scenario and, to reduce uncertainty in the estimated RIID algorithm performance for each scenario, augmenting these real repeats with realistic synthetic repeats. Our results suggest that, for the scenarios and algorithms considered, approximately 10 real repeats augmented with simulated repeats yield an estimate with uncertainty comparable to one based on 60 real repeats. Published in 2009 by John Wiley & Sons, Ltd.

10.
11.
The speed of algorithms specifically designed to solve sparse matrix equations depends on the ordering of the unknowns. Because it is difficult to know what a good ordering is, many resequencing algorithms have been developed to reorder the equations so as to minimize the execution time of the solver being used. There is no theoretical way of evaluating resequencing algorithms, but four widely used ones (Cuthill-McKee, Gibbs-Poole-Stockmeyer, Levy, Gibbs-King) have been compared with one another on a set of benchmark test problems. This paper reports what we believe to be minimal or near-minimal matrix profiles and wavefronts for the benchmark problems. Comparison with the results produced by the widely used resequencing algorithms shows that they produce profiles typically a few tens of per cent greater than minimal, but 50 to 100 per cent greater on two problem types. The algorithm that produced the near-minimal results used a simulated annealing technique, and is far too slow for general use.
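For reference, the classic Cuthill-McKee reordering named above reduces to a breadth-first search that starts from a low-degree node and visits neighbours in order of increasing degree. A minimal sketch (plain Cuthill-McKee on an adjacency-set graph, not the benchmark implementations compared in the paper):

```python
from collections import deque

def cuthill_mckee(adj):
    """Cuthill-McKee ordering: BFS from a low-degree start node, visiting
    neighbours by increasing degree; adj maps node -> set of neighbours."""
    order = []
    visited = set()
    for start in sorted(adj, key=lambda v: len(adj[v])):
        if start in visited:
            continue  # each connected component is ordered separately
        visited.add(start)
        queue = deque([start])
        while queue:
            v = queue.popleft()
            order.append(v)
            for w in sorted(adj[v] - visited, key=lambda u: len(adj[u])):
                visited.add(w)
                queue.append(w)
    return order

def bandwidth(adj, order):
    """Half-bandwidth of the matrix implied by the ordering."""
    pos = {v: k for k, v in enumerate(order)}
    return max((abs(pos[u] - pos[v]) for v in adj for u in adj[v]), default=0)
```

On a path graph with scrambled labels this recovers the natural chain ordering; reversing `order` gives the reverse Cuthill-McKee variant usually preferred for profile reduction.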

12.
Brain imaging genetics is a popular research topic evaluating the association between genetic variations and neuroimaging quantitative traits (QTs). As a bi-multivariate analysis method, sparse canonical correlation analysis (CCA) is a useful technique that efficiently identifies genetic effects on the brain by modelling dependencies between genotype and phenotype variables. This work makes an initial effort to evaluate several sparse CCA methods for brain imaging genetics. A linear model is proposed to generate realistic imaging genomic data with selected genotype-phenotype associations drawn from real data, effectively capturing the sparsity of the underlying projections. Three sparse CCA algorithms applied to the synthetic data show better or comparable performance in terms of the estimated canonical correlations, and successfully identify an important association between genotype and phenotype. Experiments on simulated and real imaging genetic data show that approximating the covariance structure with an identity or diagonal matrix, as these sparse CCA algorithms do, can limit their capability to identify the underlying imaging genetics associations. Further development depends largely on enhanced sparse CCA methods that properly account for the covariance structures in imaging genetics data.

13.
Maintenance actions can be classified, according to their efficiency, into three categories: perfect maintenance, imperfect maintenance, and minimal maintenance. The literature on imperfect maintenance is voluminous, and many models have been developed to treat it. Yet two important problems in the maintenance community remain wide open: how to give practical grounds for an imperfect-maintenance model, and how to test the fit of a real dataset to such a model. Motivated by these two pending problems, this work develops an imperfect-maintenance model by taking a physically meaningful approach. For the practical implementation of the developed model, we advance two methods, the QMI method and the spacing-likelihood algorithm, to estimate the unknown parameters involved. The two methods complement each other and are widely applicable. As a practical guide for testing fit to an imperfect-maintenance model, this work promotes a bootstrapping approach to approximating the distribution of a test statistic. The attractions and dilemmas of the QMI method and the spacing-likelihood algorithm are revealed via simulated data, and the utility of the developed imperfect-maintenance model is evidenced via a real dataset. This article has supplementary material online.

14.
A comprehensive and systematic strategy for evaluating the performance of several trilinear second-order calibration algorithms is presented in this paper, with a view to practical applications. Several trilinear second-order calibration methods, PARAFAC, ATLD, SWATLD and APTLD, which enjoy the "second-order advantage" and are gaining widespread acceptance in chemometrics, were compared. For different input conditions, including noise level, initial values, number of estimated components and collinearity in simulated and real data, the methods were evaluated in terms of predictive ability, consistency between resolved and real profiles, fitness obtained with the selected components, and speed of convergence. The results give a re-evaluation of the position and role of these trilinear second-order calibration methods in chemometrics and provide guidance for practical applications in solving complicated quantitative analysis problems in analytical chemistry, for example in choosing which algorithm is more suitable for predicting the concentration of the analyte(s) of interest even in the presence of unknown interferents in complex systems.

15.
Dynamic test signals of bridge structures are heavily contaminated by noise, which makes it difficult to separate the effective structural signal. Building on ensemble empirical mode decomposition (EEMD) and principal component analysis, an adaptive decomposition and reconstruction method is established. The mode-mixing phenomenon in empirical mode decomposition is analysed in depth: mode-mixing type I is improved by exploiting the uniformity of the white-noise probability density function, and type II is improved through correlation analysis; the improved decomposition offers considerably better computational efficiency and accuracy. All resulting intrinsic mode functions are then subjected to multi-scale principal component analysis, which denoises the data while selecting and reconstructing the test signal. The effectiveness of the method is verified on both simulated signals and measured bridge test signals. The results show that the improved adaptive decomposition and reconstruction method can extract bridge structural information effectively while suppressing noise, and can be applied to the dynamic test analysis of real bridge structures.

16.
Thomson MJ, Liu J, Taghizadeh MR. Applied Optics, 2004, 43(10): 1996-1999
We present a design method based on the Gerchberg-Saxton algorithm for high-performance diffractive optical elements. Results from this algorithm are compared with those from simulated annealing and the iterative Fourier-transform algorithm: the element performance is comparable with that of elements designed by simulated annealing, whereas the design time is similar to that of the iterative Fourier-transform method. Finally, we present results for a demanding beam-shaping task that was beyond the capabilities of either of the traditional algorithms; the elements demonstrate greater than 85% efficiency and less than 2% uniformity error.

17.
This paper elucidates the computation of optimal controls for steel annealing processes, modelled as hybrid systems comprising one or more furnaces integrated with plant-wide planning and scheduling operations. A class of hybrid system is considered to capture the trade-off between metallurgical quality requirements and timely product delivery. Various optimization algorithms are considered for the optimal control problems of the steel annealing processes (SAP): particle swarm optimization (PSO) with time-varying inertia weight, PSO with globally and locally tuned parameters (GLBest PSO), parameter-free PSO (pf-PSO), a PSO-like algorithm via extrapolation (ePSO), a real coded genetic algorithm (RCGA), and a two-phase hybrid real coded genetic algorithm (HRCGA). The optimal solutions, including optimal line speed, optimal cost, job completion time and convergence rate, obtained with all these algorithms are compared with each other and with those of the existing forward algorithm (FA). Various statistical analyses, including an analysis of variance (ANOVA) test and a hypothesis t-test, are carried out to compare the performance of each method in solving the optimal control problems of the SAP. The comparative study indicates that the PSO-like algorithms pf-PSO and ePSO are equally good and better than all the other optimization methods considered.
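The PSO variants compared in the abstract all build on the same velocity-and-position update. A minimal global-best PSO with a fixed inertia weight (a textbook baseline, not GLBest PSO, pf-PSO or ePSO; all parameter values and names here are illustrative assumptions):

```python
import random

def pso(f, dim, bounds, n_particles=20, iters=200, w=0.7, c1=1.5, c2=1.5, seed=0):
    """Global-best particle swarm optimisation with a fixed inertia weight."""
    rng = random.Random(seed)
    lo, hi = bounds
    X = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    V = [[0.0] * dim for _ in range(n_particles)]
    P = [x[:] for x in X]            # personal best positions
    p_val = [f(x) for x in X]
    g = min(range(n_particles), key=lambda i: p_val[i])
    gbest, g_val = P[g][:], p_val[g]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                V[i][d] = (w * V[i][d]
                           + c1 * r1 * (P[i][d] - X[i][d])    # cognitive pull
                           + c2 * r2 * (gbest[d] - X[i][d]))  # social pull
                X[i][d] = min(hi, max(lo, X[i][d] + V[i][d]))
            val = f(X[i])
            if val < p_val[i]:
                P[i], p_val[i] = X[i][:], val
                if val < g_val:
                    gbest, g_val = X[i][:], val
    return gbest, g_val
```

The variants in the abstract mainly differ in how `w`, `c1` and `c2` are scheduled or removed; the time-varying inertia weight methods, for instance, decrease `w` over the iterations instead of holding it fixed.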

18.
Generalised time-frequency filtering of aircraft flutter flight-test signals
唐炜, 史忠科. 《振动与冲击》2007, 26(11): 50-53, 63
To address the high noise level in aircraft flutter flight-test signals, a generalised time-frequency filtering algorithm is proposed. The algorithm applies the fractional Fourier transform to perform generalised time-frequency analysis of the linear chirp excitation and its response signals; exploiting the energy-concentration property of such signals in the fractional Fourier domain, it effectively extracts the true response and separates signal from noise. The filtering algorithm is described in detail and applied to both a simulated example and real flight-test data; the results show that the method significantly improves the signal-to-noise ratio.

19.
Degradation testing is an effective tool for evaluating the reliability of highly reliable products, and many data collection methods have been proposed in the literature. Some of these assume that only degradation values are recorded, and some assume failure times to be available. However, most research has been devoted to proposing parameter estimates or to designing degradation tests for a specific sampling method; the differences between the commonly used methods have rarely been investigated. This lack of comparisons has made it difficult to select an appropriate means by which to collect data, and it remains unclear whether obtaining extra information (e.g., exact failure times) is useful for making statistical inferences. In this paper, we assume that the degradation path of a product follows a Wiener degradation process, and we summarize several data collection methods. Maximum likelihood estimates for parameters and their variance-covariance matrices are derived for each type of data. Several commonly used optimization criteria for designing a degradation test are used to compare estimation efficiency. Sufficient conditions under which one method is better than the others are proposed, and upper bounds on estimation efficiency are investigated. Our results provide useful guidelines by which to choose a sampling method, as well as its design variables, to obtain efficient estimation. A simulated example based on real light-emitting diode data is studied to verify our theoretical results under a moderate sample size scenario.
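For a Wiener degradation process Y(t) = mu*t + sigma*B(t), increments over disjoint intervals are independent normals with mean mu*dt and variance sigma^2*dt, which gives closed-form maximum likelihood estimates from a single observed path: mu_hat = sum(dY)/sum(dt) and sigma2_hat = mean((dY - mu_hat*dt)^2 / dt). A minimal sketch under this standard model (the paper's comparison of sampling methods is not reproduced here; function names and simulation settings are illustrative):

```python
import random

def wiener_mle(times, values):
    """Closed-form MLEs for a Wiener degradation process
    Y(t) = mu*t + sigma*B(t), from one observed path."""
    dt = [t1 - t0 for t0, t1 in zip(times, times[1:])]
    dy = [y1 - y0 for y0, y1 in zip(values, values[1:])]
    mu_hat = sum(dy) / sum(dt)  # drift: total change over total time
    sigma2_hat = sum((d - mu_hat * h) ** 2 / h
                     for d, h in zip(dy, dt)) / len(dt)
    return mu_hat, sigma2_hat

def simulate_path(mu, sigma, dt, n, rng):
    """Simulate a Wiener degradation path on an equally spaced time grid."""
    times, values = [0.0], [0.0]
    for _ in range(n):
        times.append(times[-1] + dt)
        values.append(values[-1] + mu * dt + sigma * rng.gauss(0.0, dt ** 0.5))
    return times, values
```

Because the increments are independent, the same formulas apply to unequally spaced observations, which is what makes comparing different sampling schedules under this model tractable.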

20.
In this paper we consider the selection and scheduling of several jobs on a single machine with sequence-dependent setup times and strictly enforced time-window constraints on the start time of each job. We demonstrate how to develop network-based algorithms that sustain the desired work-in-process (WIP) profile in a manufacturing environment. Short-term production targets are used to coordinate decentralised local schedulers and to bring the objectives of specific areas in line with the chain objectives. A wide range of test problems with two different network structures is simulated. The effectiveness, efficiency, and robustness of the proposed algorithms are analysed and compared with an exhaustive search approach.
