Similar Articles
20 similar articles found (search time: 31 ms)
1.
In this paper, several boundary element regularization methods, such as iterative, conjugate gradient, Tikhonov regularization and singular value decomposition methods, for solving the Cauchy problem associated with the Helmholtz equation are developed and compared. Regularizing stopping criteria are developed, and the convergence and stability of the proposed numerical methods are analysed. The Cauchy problem for the Helmholtz equation can be regularized by various methods, including the general regularization methods presented in this paper, but more accurate results are obtained by the classical methods, namely the singular value decomposition and Tikhonov regularization methods. Copyright © 2004 John Wiley & Sons, Ltd.
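For orientation, the classical techniques named above amount to filtering the small singular values of an ill-conditioned linear system. The sketch below applies Tikhonov regularization and truncated SVD to a generic ill-conditioned system Ax = b of the kind a boundary-element discretization of a Cauchy problem produces; it is a minimal NumPy illustration under placeholder data, not the authors' implementation, and the matrix, noise level, and regularization parameters are all made-up examples.

```python
import numpy as np

def tikhonov_svd(A, b, lam):
    """Tikhonov solution x = argmin ||Ax - b||^2 + lam^2 ||x||^2, computed via the SVD."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    filt = s / (s**2 + lam**2)          # filter factors damping the small singular values
    return Vt.T @ (filt * (U.T @ b))

def truncated_svd(A, b, k):
    """Regularization by discarding all but the k largest singular values."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    return Vt.T[:, :k] @ ((U.T[:k] @ b) / s[:k])

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n = 50
    # Placeholder ill-conditioned system standing in for a discretized Cauchy problem.
    A = np.vander(np.linspace(0.0, 1.0, n), n, increasing=True)
    x_true = np.sin(np.linspace(0.0, np.pi, n))
    b = A @ x_true + 1e-6 * rng.standard_normal(n)   # noisy "Cauchy data"
    # In practice lam / k would be chosen by an L-curve or discrepancy-type stopping criterion.
    for lam in (1e-8, 1e-6, 1e-4):
        x = tikhonov_svd(A, b, lam)
        print("Tikhonov", lam, np.linalg.norm(x - x_true) / np.linalg.norm(x_true))
    x = truncated_svd(A, b, k=10)
    print("TSVD k=10", np.linalg.norm(x - x_true) / np.linalg.norm(x_true))
```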

2.
Multiplier methods used to solve constrained engineering optimization problems are described. These methods solve the problem by minimizing a sequence of unconstrained problems defined using the cost and constraint functions. The methods, proposed in 1969, have been determined to be quite robust, although not as efficient as other algorithms. They can be more effective for some engineering applications, such as optimum design and control of large-scale dynamic systems. Since 1969, several modifications and extensions of the methods have been developed. Therefore, it is important to review the theory and computational procedures of these methods so that more efficient and effective ones can be developed for engineering applications. Recent methods that are similar to the multiplier methods are also discussed; these include continuous multiplier update, exact penalty, and exponential penalty methods.
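The multiplier (augmented-Lagrangian) idea described above can be summarized in a few lines: each outer iteration minimizes the cost plus a penalized, multiplier-shifted constraint term, then updates the multiplier estimate. The sketch below is a generic illustration for a single equality constraint, with a made-up toy objective and SciPy's unconstrained minimizer standing in for whatever inner solver a real design code would use; none of it is taken from the reviewed papers.

```python
import numpy as np
from scipy.optimize import minimize

# Toy placeholder problem: minimize f(x) subject to c(x) = 0.
f = lambda x: (x[0] - 2.0)**2 + (x[1] - 1.0)**2
c = lambda x: x[0] + x[1] - 1.0            # single equality constraint

def augmented_lagrangian(f, c, x0, rho=10.0, iters=20):
    """Classical method of multipliers: minimize f + lam*c + (rho/2)*c^2, then update lam."""
    x, lam = np.asarray(x0, dtype=float), 0.0
    for _ in range(iters):
        L = lambda x: f(x) + lam * c(x) + 0.5 * rho * c(x)**2
        x = minimize(L, x, method="BFGS").x    # inner unconstrained minimization
        lam += rho * c(x)                      # first-order multiplier update
        if abs(c(x)) < 1e-8:
            break
    return x, lam

x_opt, lam_opt = augmented_lagrangian(f, c, x0=[0.0, 0.0])
print(x_opt, lam_opt)   # expected near x = (1, 0) with multiplier near 2 for this toy problem
```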

3.
Recently, stable meshfree methods for computational fluid mechanics have attracted growing interest. So far, such methods have mostly resorted to strategies similar to those already used for stabilized finite element formulations. In this study, we introduce an information theoretical interpretation of Petrov–Galerkin methods and Green’s functions. As a consequence of such an interpretation, we establish a new class of methods, the so-called information flux methods. These schemes may not be considered as stabilized methods, but rather as methods which are stable by their very nature. Using the example of convection–diffusion problems, we demonstrate these methods’ excellent stability and accuracy, both in one and in higher dimensions.
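As background to the stabilized formulations this abstract contrasts itself with, the sketch below shows the classical "optimal" Petrov–Galerkin (upwinding) treatment of a 1-D steady convection–diffusion problem, where the stabilization parameter coth(Pe) − 1/Pe makes the central-difference scheme nodally exact. It is a textbook illustration, not the information flux method introduced in the paper; the velocity, diffusivity, and node count are placeholder values.

```python
import numpy as np

# 1-D steady convection-diffusion  u*c' - k*c'' = 0 on [0,1],  c(0)=0, c(1)=1,
# central differences plus the classical optimal artificial diffusion.
u, k, n = 1.0, 0.01, 21                  # placeholder velocity, diffusivity, node count
h = 1.0 / (n - 1)
Pe = u * h / (2.0 * k)                   # element Peclet number
alpha = 1.0 / np.tanh(Pe) - 1.0 / Pe     # optimal upwinding parameter
k_eff = k + alpha * u * h / 2.0          # stabilized (augmented) diffusivity

A = np.zeros((n, n)); b = np.zeros(n)
for i in range(1, n - 1):
    A[i, i - 1] = -u / (2 * h) - k_eff / h**2
    A[i, i]     =  2 * k_eff / h**2
    A[i, i + 1] =  u / (2 * h) - k_eff / h**2
A[0, 0] = A[-1, -1] = 1.0; b[-1] = 1.0   # Dirichlet boundary conditions

c = np.linalg.solve(A, b)
x = np.linspace(0.0, 1.0, n)
exact = (np.exp(u * x / k) - 1.0) / (np.exp(u / k) - 1.0)
print(np.max(np.abs(c - exact)))         # nodally (almost) exact even though Pe > 1
```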

4.
The solubility of drugs is a crucial physicochemical property in the drug discovery and development process and for improving drug bioavailability. There are various methods for evaluating drug solubility, including manual measurement methods, mathematical methods, and smart methods. Manual measurement and mathematical methods have shortcomings that make smart systems more reliable and important in this field. In this review, the various instruments used for solubility determination, along with the smart systems, are discussed. The mechanism and applications of each method are elaborated in detail. Moreover, the unique characteristics as well as some limitations of the discussed methods are also described.

5.
In many cases, boundary integral equations contain a domain integral. This can be evaluated by discretization of the domain into domain elements. Historically, this was seen as going against the spirit of boundary element methods, and several methods were developed to avoid this discretization, notably dual and multiple reciprocity methods and particular solution methods. These involved the representation of the interior function with a set of basis functions, generally of the radial type. In this study, meshless methods (dual reciprocity and particular solution) are compared to the direct domain integration methods. The domain integrals are evaluated using traditional methods and also with multipole acceleration. It is found that the direct integration always results in better accuracy, as well as smaller computation times. In addition, the multipole method further improves on the computation times, in particular where multiple evaluations of the integral are required, as when iterative solvers are used. The additional error produced by the multipole acceleration is negligible. Copyright © 2001 John Wiley & Sons, Ltd.

6.
Identifying and ranking high pedestrian crash zones plays a key role in developing efficient and effective strategies to enhance pedestrian safety. This paper presents (1) a Geographical Information Systems (GIS) methodology to study the spatial patterns of pedestrian crashes in order to identify high pedestrian crash zones, and (2) an evaluation of methods to rank these high pedestrian crash zones. The GIS based methodology to identify high pedestrian crash zones includes geocoding crash data, creating crash concentration maps, and then identifying high pedestrian crash zones. Two methods generally used to create crash concentration maps based on density values are the Simple Method and the Kernel Method. Ranking methods such as crash frequency, crash density, and crash rate, as well as composite methods such as the sum-of-the-ranks and the crash score methods, are used to rank the selected high pedestrian crash zones. The use of this methodology and of the ranking methods for high pedestrian crash zones is illustrated using the Las Vegas metropolitan area as the study area. Crash data collected for a 5-year period (1998-2002) were address matched using the street name/reference street name intersection location reference system. A crash concentration map was then created using the Kernel Method as it facilitates the creation of a smooth density surface when compared to the Simple Method. Twenty-two linear high crash zones and seven circular high crash zones were then identified. The GIS based methodology reduced the subjectivity in the analysis process. Results obtained from the evaluation of methods to rank high pedestrian crash zones show a significant variation in ranking when individual methods were considered. However, rankings of high pedestrian crash zones were relatively consistent with little to no variation when the sum-of-the-ranks method and the crash score method were used. Thus, these composite methods are recommended for use in ranking high pedestrian crash zones instead of individual methods.
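The Kernel Method referred to above is, in essence, two-dimensional kernel density estimation over geocoded crash locations. A minimal sketch of how such a crash concentration surface can be computed is given below; the crash coordinates, grid, and bandwidth are placeholders (not the Las Vegas data), and a GIS package would normally perform this step internally.

```python
import numpy as np

def kernel_density_surface(xy, grid_x, grid_y, bandwidth):
    """Gaussian kernel density estimate of crash concentration on a regular grid.

    xy        : (n, 2) array of crash coordinates (e.g. projected metres)
    grid_x/y  : 1-D arrays defining the output raster
    bandwidth : kernel smoothing parameter / search radius
    """
    gx, gy = np.meshgrid(grid_x, grid_y)
    density = np.zeros_like(gx)
    for cx, cy in xy:
        d2 = (gx - cx)**2 + (gy - cy)**2
        density += np.exp(-0.5 * d2 / bandwidth**2)
    # Scale so each crash contributes unit mass; values are then crashes per unit area.
    return density / (2.0 * np.pi * bandwidth**2)

# Placeholder crash locations -- in practice these come from geocoded crash records.
rng = np.random.default_rng(1)
crashes = rng.uniform(0, 1000, size=(200, 2))
surface = kernel_density_surface(crashes, np.linspace(0, 1000, 101),
                                 np.linspace(0, 1000, 101), bandwidth=100.0)
print(surface.shape, surface.max())   # cells with the highest values mark candidate high-crash zones
```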

7.
We develop a task-structure for design problem-solving. The task-structure of a complex problem-solving activity such as design is a hierarchical organization of subtasks. For each task in the task-structure, we can then proceed to investigate what methods may be available, and what knowledge and inference requirements each of these methods has. Some of the methods may be domain-specific, some of them more generic in character, some may involve traditional computational techniques, and some others may involve searching in a problem space for solutions to the task. However, this systematic process of identifying tasks, methods, and subtasks will help us to see how design as a general problem is solved not by one method or technique but by an opportunistic exploitation of whatever methods are available (i.e., the knowledge required for using a method is available) to help accomplish a subtask. Thus, in principle, very different methods and knowledge can be brought into play in as flexible a way as applicable. For design problem-solving, we provide such an analysis for a family of design methods that we call propose-verify-critique-modify methods. We end with remarks about how these methods can be flexibly integrated into a control structure that matches the subtasks with methods for which knowledge is available.

8.
Measurements of the diffuse spectral reflectance are usually not made as direct measurements of the incident and the reflected radiant flux, but rather as measurements relative to a standard of known reflectance value. For the calibration of such standards, different methods have been described in the literature:
  1. Goniophotometric methods, also called Indicatrix methods or point-by-point methods.
  2. Methods based on the Kubelka-Munk theory.
  3. Integrating sphere methods according to Taylor, Benford, Sharp-Little, van den Akker, Korte.
Various materials such as magnesium oxide, barium sulfate or opal glass are being used as standards. Their suitability as transfer or as working standards will be discussed. The results of comparative measurements between some of these methods will be given.

9.
Non-dynamic methods for measuring detonator output power
陈西武, 周彬 《爆破器材》1998, 27(2): 21-23
This article reviews the non-dynamic methods for measuring detonator output power, analyses their characteristics, scope of application and development trends, and argues that the reference explosive charge should be standardized, that the laws of energy transfer across the detonator–inert-medium interface should be studied, and that a suitable test method should be chosen according to practical needs.

10.
Discussion of analysis methods and measurement instrumentation systems for blast vibration
李彬峰 《爆破》2003, 20(1): 81-84
This paper discusses the measurement and analysis methods relevant to blast vibration and several software packages for analysing and processing blast-vibration data, and describes in detail the performance characteristics of several commonly used processing methods, with the aim of guiding practical measurement and improving the correctness and scientific rigour of data processing.

11.
Objective: To review the common qualitative and quantitative analysis methods for plant polyphenols, and to provide methodological and experimental support for their use as a resource in the field of food packaging and storage. Methods: Common qualitative methods such as paper chromatography, thin-layer chromatography and colorimetric methods, together with frequently used quantitative methods such as potassium permanganate titration, the ferrous tartrate method, the Prussian blue method, the Folin–Ciocalteu method, the vanillin–hydrochloric acid method, the n-butanol–hydrochloric acid method, near-infrared spectroscopy and atomic absorption spectrometry, are introduced and compared in terms of experimental principle, practical operation, scope of application, and advantages and disadvantages; the choice of quantitative method and of reference standards is discussed further. Results: The common qualitative and quantitative methods for plant polyphenols differ in principle and in complexity, their results can differ considerably, and each has its own strengths and weaknesses. In practice, several different analytical methods are usually applied to quantify the polyphenols in the same sample, so as to obtain a comprehensive characterization of the contents of polyphenols of different structures and types; in quantitative analysis, a pure phenolic compound of the same class as the polyphenols in the sample is normally used as the reference standard. Conclusion: Researchers need to select several suitable methods, according to their research aims and the characteristics of the sample, to build a reasonable analysis scheme for the qualitative and quantitative analysis of plant polyphenols, so as to fully reflect the types and contents of the polyphenols in the sample.
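Most of the quantitative assays listed above (Folin–Ciocalteu, ferrous tartrate, Prussian blue, vanillin–HCl, etc.) reduce in practice to reading a sample absorbance against a calibration curve built from a standard phenolic compound of the same class. The sketch below shows only that common calibration step, with entirely made-up absorbance values and gallic acid named purely as an illustrative standard; it is not tied to any particular assay from the review.

```python
import numpy as np

# Placeholder calibration data: absorbance of standard solutions (e.g. gallic acid)
# at known concentrations (mg/L). Real values depend on the assay and the instrument.
std_conc = np.array([0.0, 10.0, 20.0, 40.0, 80.0])
std_abs  = np.array([0.02, 0.11, 0.21, 0.40, 0.79])

# Least-squares straight-line fit: absorbance = slope * concentration + intercept.
slope, intercept = np.polyfit(std_conc, std_abs, 1)

def concentration(sample_abs, dilution_factor=1.0):
    """Convert a measured sample absorbance back to a standard-equivalent concentration."""
    return dilution_factor * (sample_abs - intercept) / slope

# Hypothetical sample absorbance, reported in standard-equivalents of the chosen reference.
print(round(concentration(0.33), 1), "mg/L (illustrative standard equivalents)")
```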

12.
Various methods for direct and indirect determination of the load-line displacement (LLD) and crack-mouth opening displacement (CMOD) were used to determine J from single-edge notched bend (SENB) specimens in three different steels. The influence of the displacement measurement on J is discussed, and it is shown that the values of J using LLD determined from clip-gauge methods to the ASTM E1820 or ISO 12135 standards are consistent with values of J determined from CMOD (either directly or using clip-gauge methods), as defined in ASTM E1820. From this work it is recommended that standard methods such as ISO 12135 should permit load-CMOD and load-LLD as alternative methods to determine J. Methods to determine LLD by corrections to the ram displacement were also shown to be effective in determining J, for applications where the use of clip gauges may be challenging, such as fracture toughness testing in sour environments, dynamic tests, or testing at very high temperature.
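For orientation, the single-specimen J estimates referred to here are normally written as an elastic part from the stress-intensity factor plus a plastic part proportional to the area under the load–displacement (LLD or CMOD) record. The sketch below shows that combination in simplified, generic form; the eta factor and all numbers are placeholders, no crack-growth correction is included, and the standards themselves must be consulted for the exact expressions.

```python
def j_estimate(K, E, nu, A_pl, eta_pl, B_N, b0):
    """Simplified single-specimen J estimate: elastic term + eta * plastic area / (net thickness * ligament).

    K      : stress-intensity factor [Pa*sqrt(m)]
    E, nu  : Young's modulus [Pa] and Poisson's ratio
    A_pl   : plastic area under the load-displacement (LLD or CMOD) record [J]
    eta_pl : plastic eta factor for the chosen displacement (placeholder value below)
    B_N, b0: net specimen thickness and initial ligament [m]
    """
    J_el = K**2 * (1.0 - nu**2) / E      # elastic contribution from K
    J_pl = eta_pl * A_pl / (B_N * b0)    # plastic contribution from the measured record
    return J_el + J_pl                    # J in J/m^2

# Illustrative numbers only -- not taken from the paper or from the standards.
print(j_estimate(K=60e6, E=207e9, nu=0.3, A_pl=15.0, eta_pl=1.9, B_N=0.0125, b0=0.025))
```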

13.
In this study, preprocessing of Raman spectra of different biological samples has been studied, and the effect of preprocessing on the ability to extract robust and quantitative information has been evaluated. Four data sets of Raman spectra were chosen in order to cover different aspects of biological Raman spectra; the samples comprised salmon oils, juice samples, salmon meat, and mixtures of fat, protein, and water. A range of frequently used preprocessing methods, as well as combinations of different methods, was evaluated. Different aspects of regression results obtained from partial least squares regression (PLSR) were used as indicators for comparing the effect of the different preprocessing methods. The results, as expected, suggest that baseline correction should be performed before normalization. By performing total intensity normalization after adequate baseline correction, robust calibration models were obtained for all data sets. Combination methods like standard normal variate (SNV), multiplicative signal correction (MSC), and extended multiplicative signal correction (EMSC) in their basic form were not able to handle the baseline features present in several of the data sets, and these methods thus provide no additional benefit compared to the approach of baseline correction followed by total intensity normalization. EMSC provides additional possibilities that require further investigation.
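The main recommendation above (baseline correction first, then total-intensity normalization) is easy to express in code. The sketch below uses a crude iterative polynomial baseline as a stand-in for whatever baseline routine was actually used in the study, together with total-area normalization and an SNV transform for comparison; the "spectrum" is a synthetic placeholder, not one of the four data sets.

```python
import numpy as np

def iterative_polynomial_baseline(spectrum, order=3, iterations=20):
    """Crude iterative polynomial baseline: refit after clipping points above the fit,
    so the curve settles underneath the peaks (a stand-in for more robust methods)."""
    x = np.arange(spectrum.size)
    y = spectrum.copy()
    for _ in range(iterations):
        fit = np.polyval(np.polyfit(x, y, order), x)
        y = np.minimum(y, fit)
    return fit

def total_intensity_normalize(spectrum):
    """Scale so the summed intensity (total area) equals one."""
    return spectrum / spectrum.sum()

def snv(spectrum):
    """Standard normal variate: centre and scale each spectrum by its own mean and std."""
    return (spectrum - spectrum.mean()) / spectrum.std()

# Placeholder "Raman spectrum": two peaks on a sloping fluorescence-like background.
x = np.linspace(0.0, 1.0, 500)
raw = 2.0 * x + np.exp(-((x - 0.3) / 0.01)**2) + 0.5 * np.exp(-((x - 0.7) / 0.02)**2)

corrected = raw - iterative_polynomial_baseline(raw)   # baseline correction first ...
normalized = total_intensity_normalize(corrected)      # ... then total-intensity normalization
print(normalized.sum(), snv(raw).std())                # -> ~1.0 and 1.0
```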

14.
Deposition of decorative coatings using PVD/CVD methods. Decorative surface treatment of consumer goods serves a cross-sectional function because of its many applications in different industries. Alongside the classical methods such as lacquering and electrochemical coating, vacuum-based methods – known by the acronyms PVD and CVD – have by now firmly established themselves. For the latter two methods, the future market in decorative surface treatment offers good growth potential. One reason for this is that product differentiation is nowadays very important for marketing, and it takes place more and more via “design” and to a lesser extent via the functionality of a product. The following article gives an overview of the possibilities of decorative coating deposition using PVD/CVD methods. Attention is paid to colour mechanisms, colour measurement, and the resulting colours.

15.
Meshless methods have some advantages over their counterparts, such as the finite-element method (FEM). However, existing meshless methods for computational electromagnetic fields are still not as efficient as FEM. In this paper, we compare two meshless methods of discretizing the computational domain of Poisson-like problems, namely the point collocation and Galerkin methods (which use the strong and weak forms of the governing equation, respectively), and their effects on the accuracy and efficiency of the magnetic field computation. We also discuss methods of handling discontinuities at the material interface. We present several examples, which also provide a means to validate and evaluate both meshless methods. Exact solutions and/or FEM are used as a basis for comparison. In addition, we verify the results by comparing computed magnetic forces against those measured experimentally.
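As a concrete illustration of the strong-form (point collocation) approach mentioned above, the sketch below solves a 1-D Poisson-type problem u'' = f with Gaussian radial basis functions collocated at scattered nodes. It is a toy example under assumed parameters (shape parameter, node count, manufactured source), not the paper's 2-D magnetostatic formulation or its handling of material interfaces.

```python
import numpy as np

# Strong-form RBF point collocation for u''(x) = f(x) on [0, 1], u(0) = u(1) = 0.
eps = 5.0                                    # Gaussian shape parameter (placeholder choice)
nodes = np.linspace(0.0, 1.0, 21)            # meshfree nodes double as collocation points

phi    = lambda x, xj: np.exp(-(eps * (x - xj))**2)
phi_xx = lambda x, xj: (4 * eps**4 * (x - xj)**2 - 2 * eps**2) * np.exp(-(eps * (x - xj))**2)
f_rhs  = lambda x: -np.pi**2 * np.sin(np.pi * x)   # manufactured source; exact u = sin(pi x)

X, XJ = np.meshgrid(nodes, nodes, indexing="ij")   # rows: collocation points, columns: RBF centres
Phi = phi(X, XJ)                                   # basis values, used to evaluate the solution
A   = phi_xx(X, XJ)                                # interior rows enforce u'' = f at the nodes
b   = f_rhs(nodes).copy()
A[0, :],  b[0]  = Phi[0, :],  0.0                  # Dirichlet condition u(0) = 0
A[-1, :], b[-1] = Phi[-1, :], 0.0                  # Dirichlet condition u(1) = 0

coeffs = np.linalg.solve(A, b)
u_num = Phi @ coeffs
print(np.max(np.abs(u_num - np.sin(np.pi * nodes))))   # max nodal error vs the exact solution
```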

16.
Solder joints are often the cause of failure in electronic devices, failing due to cyclic creep-induced ductile fatigue. This paper reviews the modelling methods available to predict the lifetime of SnPb and SnAgCu solder joints under thermo-mechanical cycling conditions such as power cycling, accelerated thermal cycling and isothermal testing; the methods do not apply to other damage mechanisms such as vibration or drop-testing. Analytical methods such as those recommended by the IPC are covered, which are simple to use but limited in capability. Finite element modelling methods are reviewed, along with the necessary constitutive laws and fatigue laws for solder; these offer the most accurate predictions at the current time. Research on state-of-the-art damage mechanics methods is also presented, although these have not undergone enough experimental validation to be recommended at present.
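Both the analytical route and the FE route described above typically end in a strain-based fatigue law of Coffin–Manson type, in which cycles to failure scale as an inverse power of the cyclic inelastic strain range. The sketch below shows only that final step, with placeholder coefficients; the fatigue constants for a real SnPb or SnAgCu alloy are model-dependent and must be taken from test data or the literature, not from this example.

```python
def coffin_manson_cycles(delta_strain, eps_f=0.325, c=-0.5):
    """Coffin-Manson-type life estimate: delta_strain/2 = eps_f * (2*N_f)**c, solved for N_f.

    delta_strain : cyclic inelastic (creep/plastic) strain range per thermal cycle
    eps_f, c     : fatigue ductility coefficient and exponent (placeholder values,
                   alloy- and model-dependent -- not taken from the paper)
    """
    return 0.5 * (delta_strain / (2.0 * eps_f)) ** (1.0 / c)

# Strain ranges that might come from an analytical or FE thermal-cycling estimate (illustrative).
for d_eps in (0.005, 0.01, 0.02):
    print(d_eps, round(coffin_manson_cycles(d_eps)), "cycles to failure (illustrative)")
```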

17.
贺孝梅  杨颖 《包装工程》2023,44(6):60-73, 116
Objective: To outline the state of development of accessible (barrier-free) design in China and abroad, clarify the related methods and theoretical models, discuss trends in the future development of accessible-design methods, and provide researchers with relevant design ideas and directions. Methods: Based on a review of the domestic and foreign literature, the concept and current state of accessible design are surveyed, and five main methods and theoretical models used in current accessible design are summarized: adaptive design, participatory design, design based on sensory information, emotional design, and design based on QFD and TRIZ theory. Conclusion: Current accessible-design methods involve users to only a limited degree, struggle to satisfy users' differing needs, and lack a complete, specifically targeted theoretical system and design methodology. In future accessible-design work, multiple design concepts and methods such as experience design and interaction design can be combined to explore innovative accessible-design methods, so that, starting from user needs, accessible products can be designed that help people with disabilities live more conveniently.

18.
薛明富  胡爱群 《声学技术》2011,30(3):259-269
Although nonlinear hearing-aid fitting algorithms have been proposed one after another, research on the individual fitting algorithms remains insufficient. Commonly used fitting algorithms such as NAL-NL1 and DSL[i/o] 5.0 are studied and simulated. The design principles of each fitting algorithm are first examined; the acoustic response characteristics prescribed by each algorithm under various typical audiograms (gain, knee points, compression ratios, degree of emphasis in each frequency band, etc.) are then analysed by simulation, and objective evaluation indices (loudness, speech intelligibility, etc.) are computed. …

19.
Effective methods leading to automated, computer-based solution of complex engineering design problems are studied in this paper. In particular, methods of automation of the finite element analyses are of primary interest here. These include algorithmic approaches, based on error estimation, adaptivity and smart algorithms, as well as heuristic approaches based on methods of knowledge engineering. A computational environment, which interactively couples h-p adaptive finite element methods with object-oriented programming and expert system tools, is presented. Several examples illustrate the merit and potential of the approaches studied here.

20.
With reference to how residual voltage arises and to the test requirements specified in the national standard, this paper describes the forms in which residual voltage occurs, its characteristics, the test methods and requirements, and the test equipment. Based on the performance of the different residual-voltage test methods and test equipment currently available, and against the test provisions of the current national standard, it lists the problems arising in present residual-voltage testing, such as those concerning test impedance, test methods, and test equipment. For these testing problems, specific workaround test methods and ideas for resolving them are proposed.
