Similar documents
20 similar documents found (search time: 62 ms)
1.
This paper is concerned with explicit integration formulae for computing integrals of n-variate polynomials over linear polyhedra in n-dimensional space ℝⁿ. Two different approaches are discussed; the first set of formulae is obtained by mapping the polyhedron in ℝⁿ into a standard n-simplex in ℝⁿ, while the second set of formulae is obtained by reducing the n-dimensional integral to a sum of n − 1 dimensional integrals which are n + 1 in number. These formulae are followed by an application example for which we have explained the detailed computational scheme. The symbolic integration formulae presented in this paper may lead to an easy and systematic incorporation of global properties of solid objects, such as, for example, volume, centre of mass, moments of inertia etc., required in engineering design problems.
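The simplex-mapping idea can be illustrated for the lowest-order cases: the affine map onto the standard n-simplex has a constant Jacobian that yields the volume, and an affine integrand integrates exactly to its value at the centroid times the volume. A minimal sketch (function names are ours, not the paper's):

```python
import numpy as np
from math import factorial

def simplex_volume(verts):
    """Volume of an n-simplex from its (n+1) x n vertex array: the affine
    map onto the standard simplex has Jacobian J, and vol = |det J| / n!."""
    v = np.asarray(verts, dtype=float)
    J = (v[1:] - v[0]).T            # columns are edge vectors from vertex 0
    return abs(np.linalg.det(J)) / factorial(J.shape[0])

def integrate_affine(verts, a, b=0.0):
    """Exact integral of the affine polynomial a.x + b over the simplex:
    an affine function integrates to its centroid value times the volume."""
    v = np.asarray(verts, dtype=float)
    centroid = v.mean(axis=0)
    return (np.dot(a, centroid) + b) * simplex_volume(verts)
```

For a general polyhedron one would first decompose it into simplices; higher-degree monomials require the full formulae of the paper.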

2.
In the control of robot manipulators, it is customary to assume that the eigenvalues of the inertia matrix are uniformly bounded from below and above. However, in this article it is shown that not all manipulators possess this property. The class of serial robot manipulators with bounded inertia matrix, referred to as class ℬ𝒟 manipulators, is completely characterized and it is shown that it includes manipulators with nontrivial joint configurations. For manipulators of this class, easily computable uniform bounds for the minimum and maximum eigenvalues of the inertia matrix are provided. © 1998 John Wiley & Sons, Inc. 15: 17–28, 1998
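The boundedness property can be seen on the textbook planar 2R arm, whose inertia matrix depends only on the elbow angle and is therefore uniformly bounded. The sketch below uses hypothetical link parameters (not from the article) and estimates the bounds empirically by sampling, rather than with the article's closed-form expressions:

```python
import numpy as np

# Hypothetical parameters for a planar 2R arm (illustration only):
m1, m2 = 1.0, 1.0              # link masses
l1, lc1, lc2 = 1.0, 0.5, 0.5   # link 1 length, center-of-mass offsets
I1, I2 = 0.1, 0.1              # link rotational inertias

def inertia_matrix(q2):
    """Standard inertia matrix of a planar 2R manipulator; it depends
    only on the elbow angle q2, so its eigenvalues admit uniform bounds."""
    c2 = np.cos(q2)
    m11 = m1 * lc1**2 + m2 * (l1**2 + lc2**2 + 2 * l1 * lc2 * c2) + I1 + I2
    m12 = m2 * (lc2**2 + l1 * lc2 * c2) + I2
    m22 = m2 * lc2**2 + I2
    return np.array([[m11, m12], [m12, m22]])

# Empirical uniform bounds obtained by sweeping the configuration space.
eigs = np.array([np.linalg.eigvalsh(inertia_matrix(q))
                 for q in np.linspace(0.0, 2 * np.pi, 721)])
lam_min, lam_max = eigs.min(), eigs.max()
```

A prismatic joint with unbounded stroke, by contrast, can make the largest eigenvalue grow without bound, which is the kind of manipulator excluded from class ℬ𝒟.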

3.
In boundary element methods, the evaluation of the weakly singular integrals can be performed either a) numerically, b) symbolically, i.e., by explicit expressions, or c) in a combined manner. The explicit integration is of particular interest, when the integrals contain the singularity or if the singularity is rather close to the integration domain. In this paper we describe the explicit expressions for the sixfold volume integrals arising for the Newton potential, i.e., for a 1/r integrand. The volume elements are axi-parallel bricks. The sixfold integrals are typical for the Galerkin method. However, the threefold integral arising from collocation methods can be derived in the same way. Received April 18, 2001; revised September 17, 2001 Published online April 25, 2002

4.
Recursive analysis, the theory of computation of functions on real numbers, has been studied from various aspects. We investigate the computational complexity of real functions using the methods of recursive function theory. Partial recursive real functions are defined and their domains are characterized as the recursively open sets. We define the time complexity of recursive real continuous functions and show that the time complexity and the modulus of uniform continuity of a function are closely related. We study the complexity of the roots and the differentiability of polynomial time computable real functions. In particular, a polynomial time computable real function may have a root of arbitrarily high complexity and may be nowhere differentiable. The concepts of the space complexity and nondeterministic computation are used to study the complexity of the integrals and the maximum values of real functions. These problems are shown to be related to the “P=?NP” and the “P=?PSPACE” questions.

5.
In the construction of numerical methods for solving stochastic differential equations it becomes necessary to calculate the expectation of products of multiple stochastic integrals. Well-known recursive relationships between these multiple integrals make it possible to express any product of them as a linear combination of integrals of the same type. This article describes how, exploiting the symbolic character of Mathematica, main recursive properties and rules of Itô and Stratonovich multiple integrals can be implemented. From here, a routine that calculates the expectation of any polynomial in multiple stochastic integrals is obtained. In addition, some new relations between integrals, found with the aid of the program, are shown and proved.
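One of the simplest identities such a routine must reproduce follows from the recursive relationships: the iterated Itô integral I₍₁,₁₎(t) equals (W_t² − t)/2, so E[I₍₁₎] = 0, E[I₍₁₎²] = t, and E[I₍₁,₁₎] = 0. Rather than symbolically (as in the article), a Monte Carlo sketch can check these expectations numerically:

```python
import numpy as np

rng = np.random.default_rng(0)
t, n_paths = 1.0, 200_000

# W_t ~ N(0, t) can be sampled directly for these particular integrals.
W = rng.normal(0.0, np.sqrt(t), size=n_paths)

I1 = W                       # single Ito integral of dW over [0, t]
I11 = (W**2 - t) / 2.0       # iterated integral I_(1,1) = (W_t^2 - t)/2

# Known expectations: E[I1] = 0, E[I1^2] = t, E[I11] = 0.
```

Higher-order products, e.g. E[I₍₁₎ · I₍₁,₁₎], are exactly what the symbolic routine computes by rewriting the product as a linear combination of multiple integrals, each of which has zero mean unless it is of order zero.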

6.
《Computers & Structures》2002,80(3-4):339-347
The main objective of this paper is to present a general three-dimensional boundary element methodology for solving transient dynamic elastoplastic problems. The elastostatic fundamental solution is used in writing the integral representation and this creates in addition to the surface integrals, volume integrals due to inertia and inelasticity. Thus, an interior discretization in addition to the usual surface discretization is necessary. Isoparametric linear quadrilateral elements are used for the surface discretization and isoparametric linear hexahedra for the interior discretization. Advanced numerical integration techniques for singular and nearly singular integrals are employed. Houbolt's step-by-step numerical time integration algorithm is used to provide the dynamic response. Numerical examples are presented to illustrate the method and demonstrate its accuracy.

7.
The latest Deep Learning (DL) models for detection and classification have achieved an unprecedented performance over classical machine learning algorithms. However, DL models are black-box methods hard to debug, interpret, and certify. DL alone cannot provide explanations that can be validated by a non-technical audience such as end-users or domain experts. In contrast, symbolic AI systems that convert concepts into rules or symbols – such as knowledge graphs – are easier to explain. However, they present lower generalization and scaling capabilities. A very important challenge is to fuse DL representations with expert knowledge. One way to address this challenge, as well as the performance-explainability trade-off, is to leverage the best of both streams without obviating domain expert knowledge. In this paper, we tackle this problem by assuming that the symbolic knowledge is expressed in the form of a domain expert knowledge graph. We present the eXplainable Neural-symbolic learning (X-NeSyL) methodology, designed to learn both symbolic and deep representations, together with an explainability metric to assess the level of alignment of machine and human expert explanations. The ultimate objective is to fuse DL representations with expert domain knowledge during the learning process so that it serves as a sound basis for explainability. In particular, the X-NeSyL methodology involves the concrete use of two notions of explanation, at inference and training time respectively: (1) EXPLANet: Expert-aligned eXplainable Part-based cLAssifier NETwork Architecture, a compositional convolutional neural network that makes use of symbolic representations, and (2) SHAP-Backprop, an explainable AI-informed training procedure that corrects and guides the DL process to align with such symbolic representations in the form of knowledge graphs.
We showcase the X-NeSyL methodology on the MonuMAI dataset for monument facade image classification, and demonstrate that with our approach it is possible to improve explainability and performance at the same time.

8.
Positive and negative inertial penalties are used to impose constraints in time domain computational dynamics. Mathematical proofs are given to show that, asymptotically, positive and negative inertial penalties render the same solution. It is also proven that the constraint approximations calculated using positive and negative inertial penalties bound the constrained solution. Based on these observations, algorithms can be developed that improve the accuracy of constraint imposition in time domain computational dynamics. In particular, a scheme based on alternating signs and a scheme based on linear interpolation are discussed, both of which are effective in reducing the error due to the use of penalties. Finally, it is demonstrated that inertia penalties tend to increase the critical time step in conditionally stable time integration schemes, which is an important advantage over the conventional stiffness-type penalties.

9.
We introduce a computable framework for Lebesgue’s measure and integration theory in the spirit of domain theory. For an effectively given second countable locally compact Hausdorff space and an effectively given finite Borel measure on the space, we define a recursive measurable set, which extends the corresponding notion due to Šanin for the Lebesgue measure on the real line. We also introduce the stronger notion of a computable measurable set, where a measurable set is approximated from inside and outside by sequences of closed and open subsets, respectively. The more refined property of computable measurable sets gives rise to the idea of partial measurable subsets, which naturally form a domain for measurable subsets. We then introduce interval-valued measurable functions and develop the notion of recursive and computable measurable functions using interval-valued simple functions. This leads us to the interval versions of the main results in classical measure theory. The Lebesgue integral is shown to be a continuous operator on the domain of interval-valued measurable functions and the interval-valued Lebesgue integral provides a computable framework for integration.

10.
The numerical integration of all singular surface integrals arising in 3-d boundary element methods is analyzed theoretically and computationally. For all weakly singular integrals arising in BEM, Duffy's triangular or local polar coordinates in conjunction with tensor product Gaussian quadrature are efficient and reliable for both h- and p-boundary elements. Cauchy- and hypersingular surface integrals are reduced to weakly singular ones by analytic regularization which is done automatically by symbolic manipulation.
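The Duffy trick can be demonstrated on a model weakly singular integral: for 1/r over the triangle 0 ≤ y ≤ x ≤ 1 with the singularity at the origin, the substitution x = u, y = uv has Jacobian u, which cancels the 1/r singularity and leaves a smooth integrand on the unit square, where tensor-product Gauss quadrature converges rapidly. A minimal sketch (the exact value here is arcsinh(1) = ln(1 + √2)):

```python
import numpy as np

def leggauss01(n):
    """Gauss-Legendre nodes and weights mapped from [-1, 1] to [0, 1]."""
    x, w = np.polynomial.legendre.leggauss(n)
    return 0.5 * (x + 1.0), 0.5 * w

def duffy_integrate(f, n=8):
    """Integrate f over the triangle 0 <= y <= x <= 1 with a point
    singularity at the origin, via the Duffy substitution x = u, y = u*v
    (Jacobian u), which cancels a 1/r singularity."""
    u, wu = leggauss01(n)
    v, wv = leggauss01(n)
    U, V = np.meshgrid(u, v)
    W = np.outer(wv, wu)
    return float(np.sum(W * U * f(U, U * V)))

val = duffy_integrate(lambda x, y: 1.0 / np.hypot(x, y))
# exact: arcsinh(1) = ln(1 + sqrt(2))
```

With the substitution, the transformed integrand is analytic, so even a modest 8×8 rule is accurate to near machine precision; without it, Gauss quadrature applied directly to 1/r converges very slowly.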

11.
To simulate solid dynamics, we must compute the mass, the center of mass, and the products of inertia about the axes of the body of interest. These mass property computations must be continuously repeated for certain simulations with rigid bodies or as the shape of the body changes. We introduce a GPU-friendly algorithm to approximate the mass properties for an arbitrarily shaped body. Our algorithm converts the necessary volume integrals into surface integrals on a projected plane. It then maps the plane into a framebuffer in order to perform the surface integrals rapidly on the GPU. To deal with non-convex shapes, we use a depth-peeling algorithm. Our approach is image-based; hence, it is not restricted by the mathematical or geometric representation of the body, which means that it can efficiently compute the mass properties of any object that can be rendered on the graphics hardware. We compare the speed and accuracy of our algorithm with an analytic algorithm, and demonstrate it in a hydrostatic buoyancy simulation for real-time applications, such as interactive games.
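The underlying volume-to-surface conversion is the divergence theorem. A CPU reference version (not the paper's GPU/framebuffer pipeline) for a closed, outward-oriented triangle mesh sums the signed volumes of the tetrahedra each face forms with the origin:

```python
import numpy as np

def mass_properties(verts, faces, density=1.0):
    """Mass and center of mass of a closed, outward-oriented triangle mesh:
    by the divergence theorem, each face (a, b, c) contributes the signed
    volume det[a b c]/6 of the tetrahedron it forms with the origin, and
    that tetrahedron's centroid is (a + b + c + 0)/4."""
    v = np.asarray(verts, dtype=float)
    total_vol = 0.0
    moment = np.zeros(3)
    for i, j, k in faces:
        a, b, c = v[i], v[j], v[k]
        vol = np.dot(a, np.cross(b, c)) / 6.0   # signed tet volume
        total_vol += vol
        moment += vol * (a + b + c) / 4.0       # volume-weighted tet centroid
    return density * total_vol, moment / total_vol
```

The products of inertia follow the same pattern with higher-order moments per tetrahedron; the paper's contribution is evaluating these face sums in a framebuffer with depth peeling for non-convex shapes.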

12.
Recent developments of multidimensional solvers using separated representation make it possible to account for the multidimensionality of mechanical models in materials science when doing numerical simulations. This paper aims to extend the separated representation to inseparable equations using an efficient integration scheme. It focuses on the dependence of constitutive equations on material coefficients. Although these coefficients can be optimized using few experimental results, they are not very well known because of the natural variability of material properties. Therefore, the mechanical state can be viewed as a function depending not only on time and space variables but also on material coefficients. This is illustrated in this paper by a sensitivity analysis of the response of a sintering model with respect to variations of material coefficients. The considered variations are defined around an optimized value of coefficients adjusted by experimental results. The proposed method is an incremental method using an extension of the integration scheme developed for the Hyper Reduction method. During the incremental solution, before the adaptation of the representation, an assumed separated representation is used as a reduced-order model. We claim that a truncated integration scheme makes it possible to forecast the reduced-state variables related to the assumed separated representation. Because the integrals involved in the formulation cannot be written as a sum of products of one-dimensional integrals, this approach reduces the extent of the integration domain.

13.
Realizability interpretations of logics are given by saying what it means for computational objects of some kind to realize logical formulae. The computational objects in question might be drawn from an untyped universe of computation, such as a partial combinatory algebra, or they might be typed objects such as terms of a PCF-style programming language. In some instances, one can show that a particular untyped realizability interpretation matches a particular typed one, in the sense that they give the same set of realizable formulae. In this case, we have a very good fit indeed between the typed language and the untyped realizability model, and we refer to this condition as (constructive) logical full abstraction. We give some examples of this situation for a variety of extensions of PCF. Of particular interest are some models that are logically fully abstract for typed languages including non-functional features. Our results establish connections between what is computable in various programming languages and what is true inside various realizability toposes. We consider some examples of logical formulae to illustrate these ideas, in particular their application to exact real-number computability.

14.
A method is proposed by which elliptic integrals can be integrated symbolically without information regarding limits of integration and branch points of the integrand that is required in integral tables using Legendre’s integrals. However, it is assumed that when all polynomials in the integrand have been factored symbolically into linear factors, the exponents of all distinct linear factors are known. The recurrence relations are one-parameter relations, all formulas are given explicitly, and the integral is eventually expressed in terms of canonical R-functions, with no increase in their number if neither limit of integration is a branch point of the integrand. It is the use of R-functions rather than Legendre’s integrals that makes it possible to carry out the whole process symbolically. If (possibly complex) numerical values of the symbols are known, there are published algorithms for numerical computation of the R-functions.
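The published numerical algorithms referred to are Carlson's duplication method. A minimal sketch for the symmetric integral R_F (real non-negative arguments only, iterated a fixed number of times rather than with Carlson's error-controlled series) shows why these canonical functions are convenient: for example, Legendre's complete integral K(m) is simply R_F(0, 1 − m, 1):

```python
from math import sqrt

def rf(x, y, z):
    """Carlson's symmetric elliptic integral R_F(x, y, z) via the
    duplication theorem: R_F is invariant under
    (x, y, z) -> ((x+l)/4, (y+l)/4, (z+l)/4), l = sqrt(xy)+sqrt(yz)+sqrt(zx),
    and the arguments converge to a common value c with R_F(c,c,c) = 1/sqrt(c)."""
    for _ in range(60):                      # differences shrink by 4x per step
        lam = sqrt(x * y) + sqrt(y * z) + sqrt(z * x)
        x, y, z = (x + lam) / 4, (y + lam) / 4, (z + lam) / 4
    return 1.0 / sqrt((x + y + z) / 3)

# Legendre's complete elliptic integral of the first kind: K(m) = R_F(0, 1-m, 1)
k_half = rf(0.0, 0.5, 1.0)
```

Because R_F is symmetric in its arguments, no case analysis on branch points is needed at evaluation time, which is precisely the property the symbolic reduction exploits.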

15.
In some applications of triangulation, such as finite-element mesh generation, the aim is to triangulate a given domain, not just a set of points. One approach to meeting this requirement, while maintaining the desirable properties of Delaunay triangulation, has been to enforce the empty circumcircle property of Delaunay triangulation, subject to the additional constraint that the edges of a polygon be covered by edges of the triangulation. In finite-element mesh generation it is usually necessary to include additional points besides the vertices of the domain boundary. This motivates us to ask whether it is possible to triangulate a domain by introducing additional points in such a manner that the Delaunay triangulation of the points includes the edges of the domain boundary. We present algorithms that, given a multiply connected polygonal domain with N vertices, place K additional points on the boundary in O(N log N + K) time such that the polygon is covered by the edges of the Delaunay triangulation of the N + K points. Furthermore, K is the minimum number of additional points such that a circle, passing through the endpoints of each boundary edge segment, exists that does not contain in its interior any other part of the domain boundary. We also show that by adding only one more point per edge, certain degeneracies that may otherwise arise can be avoided.
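A crude way to see the idea is to split boundary edges at midpoints until every sub-edge has an empty diametral circle; such Gabriel edges are always Delaunay edges, so the refined boundary is covered by the Delaunay triangulation of the point set. This sketch is a simplified stand-in for the paper's algorithm: it neither minimizes K nor runs in O(N log N + K) time, and it only tests against boundary points.

```python
from math import dist

def split_until_conforming(poly, max_rounds=32, eps=1e-12):
    """Split polygon boundary edges at their midpoints until each sub-edge's
    diametral circle contains no other boundary point (a Gabriel edge, hence
    a Delaunay edge). Returns the list of sub-edges as vertex-tuple pairs."""
    pts = [tuple(map(float, p)) for p in poly]
    edges = [(pts[i], pts[(i + 1) % len(pts)]) for i in range(len(pts))]
    for _ in range(max_rounds):
        allpts = {p for e in edges for p in e}
        new_edges, changed = [], False
        for a, b in edges:
            mid = ((a[0] + b[0]) / 2, (a[1] + b[1]) / 2)
            r = dist(a, b) / 2
            blocked = any(dist(q, mid) < r - eps
                          for q in allpts if q != a and q != b)
            if blocked:                       # diametral circle not empty: split
                new_edges += [(a, mid), (mid, b)]
                changed = True
            else:
                new_edges.append((a, b))
        edges = new_edges
        if not changed:
            break
    return edges
```

For a notched polygon such as (0,0), (4,0), (4,4), (2,0.5), (0,4), the vertex (2, 0.5) lies inside the diametral circle of the bottom edge, so that edge is split once and the result then conforms.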

16.
Applicative theories form the basis of Feferman’s systems of explicit mathematics, which have been introduced in the 1970s. In an applicative universe, all individuals may be thought of as operations, which can freely be applied to each other: self-application is meaningful, but not necessarily total. It has turned out that theories with self-application provide a natural setting for studying notions of abstract computability, especially from a proof-theoretic perspective. This paper is concerned with the study of (unramified) bounded applicative theories which have a strong relationship to classes of computational complexity. We propose new applicative systems whose provably total functions coincide with the functions computable in polynomial time, polynomial space, polynomial time and linear space, as well as linear space. Our theories can be regarded as applicative analogues of traditional systems of bounded arithmetic. We are also interested in higher-type features of our systems; in particular, it is shown that Cook and Urquhart’s system is directly contained in a natural applicative theory of polynomial strength.

17.
In the general machine learning domain, solutions based on the integration of deep learning models with knowledge-based approaches are emerging. Indeed, such hybrid systems have the advantage of improving the recognition rate and the model’s interpretability. At the same time, they require a significantly reduced amount of labeled data to reliably train the model. However, these techniques have been poorly explored in the sensor-based Human Activity Recognition (HAR) domain. The common-sense knowledge about activity execution can potentially improve purely data-driven approaches. While a few knowledge infusion approaches have been proposed for HAR, they rely on rigid logic formalisms that do not take into account uncertainty. In this paper, we propose P-NIMBUS, a novel knowledge infusion approach for sensor-based HAR that relies on probabilistic reasoning. A probabilistic ontology is in charge of computing symbolic features that are combined with the features automatically extracted by a CNN model from raw sensor data and high-level context data. In particular, the symbolic features encode probabilistic common-sense knowledge about the activities consistent with the user’s surrounding context. These features are infused within the model before the classification layer. We experimentally evaluated P-NIMBUS on a HAR dataset of mobile devices sensor data that includes 14 different activities performed by 25 users. Our results show that P-NIMBUS outperforms state-of-the-art neuro-symbolic approaches, with the advantage of requiring a limited amount of training data to reach satisfying recognition rates (i.e., more than 80% of F1-score with only 20% of labeled data).

18.
The implementation of the symmetric Galerkin boundary element method (SGBEM) involves extensive work on the evaluation of various integrals, ranging from regular integrals to hypersingular integrals. In this paper, the treatments of weak singular integrals in the time domain are reviewed, and analytical evaluations for the spatial double integrals which contain weak singular terms are derived. A special scheme on the allocation of Gaussian integration points for regular double integrals in the SGBEM is developed to improve the efficiency of the Gauss–Legendre rule. The proposed approach is implemented for the two-dimensional elastodynamic problems, and two numerical examples are presented to verify the accuracy of the numerical implementation.
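The baseline being improved here is the tensor-product Gauss–Legendre rule for regular double integrals. A generic sketch (not the paper's allocation scheme) over the unit square, where an n-point rule per direction is exact for polynomials of degree up to 2n − 1 in each variable:

```python
import numpy as np

def gauss_double(f, nx, ny):
    """Tensor-product Gauss-Legendre quadrature of f over [0, 1]^2,
    with nx points in x and ny points in y (weights rescaled from [-1, 1])."""
    x, wx = np.polynomial.legendre.leggauss(nx)
    y, wy = np.polynomial.legendre.leggauss(ny)
    x, wx = 0.5 * (x + 1.0), 0.5 * wx
    y, wy = 0.5 * (y + 1.0), 0.5 * wy
    X, Y = np.meshgrid(x, y)
    return float(np.outer(wy, wx).ravel() @ f(X, Y).ravel())

# Smooth test integrand: the double integral of e^{x+y} equals (e - 1)^2.
val = gauss_double(lambda X, Y: np.exp(X + Y), 6, 6)
```

Allocation schemes such as the one in the paper choose nx and ny per element pair based on smoothness and proximity to singular terms, so that accuracy is reached with as few point evaluations as possible.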

19.
A boundary element method, called the charge simulation method, is presented for analysis of anisotropic thin-plate bending problems. In this method the singular integrals involved in the other boundary element methods are eliminated and there is no numerical integration involved. Further, the domain integral is replaced by a polynomial particular integral; hence the domain discretization is avoided. This method is conceptually very simple. The results obtained by this method are compared with the available analytical solutions for various anisotropic and symmetric laminates and the results are in good agreement.

20.
S. Börm, W. Hackbusch 《Computing》2005,74(2):75-100
We introduce a method for the computation of singular integrals arising in the discretization of integral equations. The basic method is based on the concept of admissible subdomains, known, e.g., from panel clustering techniques and ℋ-matrices: We split the domain of integration into a hierarchy of subdomains and perform standard quadrature on those subdomains that are amenable to it. By using additional properties of the integrand, we can significantly reduce the algorithmic complexity of our approach. The method works also well for hypersingular integrals.

