Total pressure losses minimization in turbomachinery cascades using the exact Hessian
Authors: T. Zervogiannis, D.I. Papadimitriou, K.C. Giannakoglou
Abstract: A method for the design of turbomachinery cascades with minimum total pressure losses, subject to constraints on the minimum blade thickness and the flow turning, is presented. It is based on the Newton–Lagrange method, which requires the computation of first- and second-order sensitivities of the objective function and the constraints with respect to the design variables. The computation of the exact Hessian of the function expressing the difference in total pressure between the cascade inlet and outlet is new in the literature. To compute the Hessian, direct differentiation of the viscous flow equations is used for the first-order sensitivities of the functional and the flow-related constraints, followed by the discrete adjoint method. Since the objective function is defined along boundaries other than those controlled by the design variables, it was necessary to investigate the significance of all terms comprising the exact second-order sensitivity expressions. All these terms were temporarily computed using automatic differentiation; those which proved significant were hand-differentiated to minimize CPU cost and memory requirements, while insignificant terms were eliminated, giving rise to the so-called “exact” Hessian matrix. An “exactly” initialized quasi-Newton method was also programmed and tested. In the latter, the exact gradient and Hessian are computed and used at the first cycle only; during subsequent optimization cycles, the discrete adjoint method provides the exact gradient, whereas the Hessian is updated as in quasi-Newton methods. The relative efficiency of the aforementioned methods depends on the number of design variables; nevertheless, the “exactly” initialized quasi-Newton method consistently outperforms its conventional variant in terms of CPU cost, particularly in non-convex and/or constrained optimization problems.
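The “exactly” initialized quasi-Newton strategy summarized above can be illustrated with a minimal, self-contained sketch; this is not the authors' code, and the toy objective, function names and step-length safeguard are illustrative assumptions. In the paper, the gradient would come from the discrete adjoint method and the Hessian from direct differentiation combined with the adjoint approach; here, analytic stand-ins are used so the loop runs as written.

```python
# Minimal sketch of an "exactly" initialized quasi-Newton (BFGS) loop:
# the exact Hessian seeds the approximation at the first cycle only;
# later cycles use exact gradients plus the standard BFGS update.
import numpy as np

def f(x):      # stand-in objective (e.g. total pressure losses); non-convex
    return (x[0] - 1.0) ** 4 + 2.0 * (x[1] + 0.5) ** 2 + x[0] * x[1]

def grad(x):   # exact gradient (in the paper: discrete adjoint)
    return np.array([4.0 * (x[0] - 1.0) ** 3 + x[1],
                     4.0 * (x[1] + 0.5) + x[0]])

def hess(x):   # exact Hessian (in the paper: direct differentiation + adjoint)
    return np.array([[12.0 * (x[0] - 1.0) ** 2, 1.0],
                     [1.0, 4.0]])

def exactly_initialized_bfgs(x, n_cycles=50, tol=1e-8):
    B = hess(x)                                 # first cycle only: exact Hessian
    for _ in range(n_cycles):
        g = grad(x)
        if np.linalg.norm(g) < tol:
            break
        p = np.linalg.solve(B, -g)              # Newton-like step on current model
        t = 1.0                                 # crude backtracking safeguard
        while f(x + t * p) > f(x) and t > 1e-6:
            t *= 0.5
        s, x_new = t * p, x + t * p
        y = grad(x_new) - g
        if y @ s > 1e-12:                       # BFGS update keeps B positive definite
            B = (B - np.outer(B @ s, B @ s) / (s @ B @ s)
                   + np.outer(y, y) / (y @ s))
        x = x_new
    return x

# converges towards the single stationary point, roughly [1.61, -0.90]
print(exactly_initialized_bfgs(np.array([0.0, 0.0])))
```

Seeding the approximation with the exact Hessian gives the early cycles Newton-like behaviour while keeping later cycles as cheap as a conventional quasi-Newton method, which is consistent with the CPU-cost advantage reported in the abstract.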
This article is indexed in ScienceDirect and other databases.