Similar Documents
20 similar documents found (search time: 31 ms)
1.
A Maple procedure is described by means of which an algebraic function given by an equation f(x, y) = 0 can be expanded into a fractional-power series (Puiseux series), i.e. a series of the form y(x) = Σ_{n ≥ n0} c_n x^{n/r} for some positive integer r, of special (nice) type. It may be a series with polynomial, rational, or hypergeometric coefficients, or an m-sparse or m-sparse m-hypergeometric series. First, a linear ordinary differential equation with polynomial coefficients Ly(x) = 0 is constructed which is satisfied by the given algebraic function. The exponents of the expansion and a required number of initial coefficients are computed by using the Maple algcurves package. By means of the Maple Slode package, a solution to the equation Ly(x) = 0 is constructed in the form of a series with nice coefficients whose initial coefficients match those computed. The procedure suggested can construct an expansion at a user-given point x0, as well as determine the points at which an expansion of such a special type is possible.
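As a small, self-contained illustration of the kind of expansion involved (not the Maple procedure itself), the sketch below computes the Puiseux coefficients of the algebraic function defined by y² = x + x² on its positive branch, namely y = x^(1/2)·(1 + x)^(1/2), using the generalized binomial series with exact rationals; the function names are ours.

```python
from fractions import Fraction

def binom(alpha, k):
    """Generalized binomial coefficient C(alpha, k) for rational alpha."""
    result = Fraction(1)
    for i in range(k):
        result *= (alpha - i)
        result /= (i + 1)
    return result

def puiseux_sqrt_coeffs(n):
    """Coefficients c_k of (1+x)**(1/2) = sum c_k x**k, so that the
    algebraic function y defined by y**2 = x + x**2 has the Puiseux
    expansion y = sum c_k x**(k + 1/2)."""
    return [binom(Fraction(1, 2), k) for k in range(n)]

print(puiseux_sqrt_coeffs(4))  # [1, 1/2, -1/8, 1/16]
```

Summing the first few terms at a small x reproduces sqrt(x + x²) to high accuracy, which is the fractional-power behaviour the abstract refers to.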

2.
For the equation x′(t) = λx(t)(1 − (1/τ)∫_{t−ω−τ}^{t−ω} x(u) du), with λ > 0, ω > 0, τ > 0, conditions for the stability of the nonzero stationary solution under small perturbations are determined.
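Assuming the reconstructed distributed-delay form above (the Greek parameter names were lost in extraction), a forward-Euler simulation illustrates the stability of the stationary solution x ≡ 1 for a small total delay; all parameter values here are illustrative, not from the paper.

```python
def simulate(lam=1.0, tau=0.5, omega=0.2, dt=0.01, T=60.0, x0=0.5):
    """Euler simulation of x'(t) = lam*x(t)*(1 - (1/tau) * integral of x
    over [t-omega-tau, t-omega]); the history is held constant at x0."""
    n_hist = int(round((omega + tau) / dt))
    n_omega = int(round(omega / dt))
    xs = [x0] * (n_hist + 1)          # values on [-(omega+tau), 0]
    steps = int(round(T / dt))
    for _ in range(steps):
        j = len(xs)
        window = xs[j - n_hist : j - n_omega]   # samples on the delay window
        avg = sum(window) / len(window)         # Riemann sum for (1/tau)*integral
        x = xs[-1]
        xs.append(x + dt * lam * x * (1.0 - avg))
    return xs[-1]
```

With these parameters the trajectory settles at the stationary value 1, consistent with stability under small perturbations for a small total delay.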

3.
The results of the application of potential theory to optimization are used to extend the use of the (Helmholtz) diffusion and diffraction equations for optimization of their solutions u(x, ε) with respect to both x and ε. If the aim function is modified so that the optimal point does not change, then the function u(x, ε) is convex in (x, ε) for small ε. The possibility of using the heat conduction equation with a simple boundary layer for global optimization is investigated. A method is designed for making the solution U(x, t) of such equations have a positive-definite matrix of second mixed derivatives with respect to x for any x in the optimization domain and any small t > 0 when the point is remote from the extremum, or a negative-definite matrix in x when the point is close to the extremum. For functions u(x, ε) and U(x, t) having these properties, the gradient and the Newton–Kantorovich methods are used in the first and second stages of optimization, respectively.
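The smoothing idea can be sketched in one dimension: convolve a multimodal objective with a Gaussian, which convexifies it for large smoothing width, then follow the minimizer while the width shrinks. Here the analytic identity E[cos(x + σZ)] = e^(−σ²/2)·cos(x) plays the role of the diffusion-equation solution; the test function and all parameters are our own, not from the paper.

```python
import math

def smoothed_grad(x, sigma):
    """Gradient of the Gaussian-smoothed objective E[f(x + sigma*Z)],
    Z ~ N(0,1), for f(x) = 0.05*(x-1)**2 + cos(x).  The smoothing is
    analytic: E[cos(x + sigma*Z)] = exp(-sigma**2/2) * cos(x)."""
    return 0.1 * (x - 1.0) - math.exp(-sigma ** 2 / 2.0) * math.sin(x)

def graduated_descent(x0=-6.0, sigmas=(3.0, 2.0, 1.0, 0.5, 0.2, 0.0),
                      lr=0.5, steps=500):
    """Gradient descent on a sequence of progressively less-smoothed
    objectives (sigma = 0 is the original function)."""
    x = x0
    for sigma in sigmas:
        for _ in range(steps):
            x -= lr * smoothed_grad(x, sigma)
    return x
```

Starting from x = −6, plain descent on the unsmoothed function stalls in a shallow local minimum near x ≈ −2.8, while the graduated scheme tracks the single minimizer of the heavily smoothed function and ends near the global minimizer x ≈ 2.95.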

4.
Summary We consider the design of a tree multiplier, a modified version of a Wallace tree multiplier [16] made suitable for VLSI design by Luk and Vuillemin [12]. It is shown that 4 log(n) + 3 test patterns suffice to exhaustively test the multiplier with respect to the cellular fault model (which includes tests for all single stuck-at faults). Slight modifications of the multiplier show that these tests can be applied without substantially increasing the number of input ports.

5.
White [6–8] has shown theoretically that the learning procedures used in network training are inherently statistical in nature. This paper takes a small but pioneering experimental step towards understanding this statistical behaviour by showing that the results obtained are completely in line with White's theory. We show that, given two random vectors X (input) and Y (target) which follow a two-dimensional standard normal distribution, and fixed network complexity, the network's fitting ability definitely improves with increasing correlation coefficient r_XY (0 ≤ r_XY ≤ 1) between X and Y. We also provide numerical examples which support that both increasing the network complexity and training for much longer do improve the network's performance. However, as we clearly demonstrate, these improvements are far from dramatic, except in the case r_XY = +1. This is mainly due to the existence of a theoretical lower bound on the inherent conditional variance, as we show both analytically and numerically. Finally, the fitting ability of the network for a test set is illustrated with an example.
Nomenclature
- X: general r-dimensional random vector; in this work a one-dimensional normal vector representing the input
- Y: general p-dimensional random vector; in this work a one-dimensional normal vector representing the target
- Z: general (r+p)-dimensional random vector; in this work a two-dimensional normal vector
- E: expectation operator (Lebesgue integral)
- g(X) = E(Y|X): conditional expectation
- experimental random error (defined by Eq. (2.1))
- y: realized target value
- o: output value
- f: network's output function, formally f: R^r × W → R^p, where W is the appropriate weight space
- average (or expected) performance function, defined by Eq. (2.2)
- network's performance
- w*: weight vector of the optimal solution; that is, the objective of the network is that the performance at w* is minimal
- C_1: component one
- C_2: component two
- Z: matrix of realised values of the random vector Z over n observations
- Z_t: transformed version of the matrix Z such that X and Y have values in [0, 1]
- X_t, Y_t: transformed versions of X and Y; both are standard one-dimensional normal vectors
- n_h: number of hidden nodes (neurons)
- r_XY: correlation coefficient between either X and Y or X_t and Y_t
- s and k in Eq. (3.1) and afterwards: s is the average value over 100 different Z_t matrices, and k indexes the error function of the k-th Z_t matrix; in Eq. (3.1) the summation runs from k = 1 to 100, and in Eq. (3.2) from i = 1 to n; in Eq. (3.2), o_ki and y_ki are the output and target values for the k-th Z_t matrix and i-th observation, respectively
- the performance at w* and its sample analogue k(w_n)
- variance of Y (in Eq. (4.1) and afterwards)
- variance of Y_t; in Sect. 4.3 the transformation is Y_t = aY + b
- Y_max, Y_min: the maximum and minimum values of Y
- R: correlation matrix of X and Y
- covariance matrix of X and Y
- membership symbol in set theory
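The theoretical lower bound on the conditional variance is easy to check numerically: for a standard bivariate normal with correlation r, Var(Y | X) = 1 − r², so no predictor, network or otherwise, can push the expected squared error below that. The Monte Carlo check below uses a least-squares line, which attains the bound here because E(Y|X) = rX is linear; the function names are ours.

```python
import random, math

def fit_mse(r, n=50_000, seed=0):
    """Sample (X, Y) from a standard bivariate normal with correlation r,
    fit the least-squares line y = a*x + b, and return the residual MSE.
    Theory: no predictor can beat Var(Y|X) = 1 - r**2."""
    rng = random.Random(seed)
    xs, ys = [], []
    for _ in range(n):
        x = rng.gauss(0, 1)
        y = r * x + math.sqrt(1 - r * r) * rng.gauss(0, 1)
        xs.append(x); ys.append(y)
    mx = sum(xs) / n; my = sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    a = sxy / sxx; b = my - a * mx
    return sum((y - (a * x + b)) ** 2 for x, y in zip(xs, ys)) / n
```

For r = 0.5 the residual MSE hovers near 0.75, and only r = 1 drives it to zero, matching the abstract's observation that improvements are undramatic except when r_XY = +1.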

6.
Summary Tsokos [12] showed the existence of a unique random solution of the random Volterra integral equation (*) x(t; ω) = h(t; ω) + ∫_0^t k(t, s; ω) f(s, x(s; ω)) ds, where ω ∈ Ω, the supporting set of a probability measure space (Ω, A, P). It was required that f satisfy a Lipschitz condition in a certain subset of a Banach space. By using an extension of Banach's contraction-mapping principle, it is shown here that a unique random solution of (*) exists when f is uniformly locally Lipschitz in the same subset of the Banach space considered in [12].
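The contraction-mapping argument behind such results can be illustrated on a deterministic special case, x(t) = 1 + ∫₀ᵗ x(s) ds, whose unique solution is eᵗ: Picard iteration on a trapezoidal discretization converges to it. This is an illustrative sketch of the fixed-point mechanism, not the paper's construction.

```python
import math

def picard_volterra(n_grid=2000, n_iter=40):
    """Picard iteration for the Volterra equation x(t) = 1 + int_0^t x(s) ds
    on [0, 1]; the exact solution is exp(t).  The contraction-mapping
    principle guarantees convergence of the iterates."""
    h = 1.0 / n_grid
    x = [1.0] * (n_grid + 1)              # initial guess x_0(t) = 1
    for _ in range(n_iter):
        new = [1.0]
        acc = 0.0
        for i in range(1, n_grid + 1):
            acc += 0.5 * h * (x[i - 1] + x[i])   # trapezoidal cumulative integral
            new.append(1.0 + acc)
        x = new
    return x[-1]                           # approximation of x(1) = e
```

After a few dozen iterations the discrete fixed point agrees with e to the accuracy of the trapezoidal rule.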

7.
Optimal shape design problems for an elastic body made of a physically nonlinear material are presented. Sensitivity analysis is performed by differentiating the discrete equations of equilibrium. Numerical examples are included.
Notation
- U_ad: set of admissible continuous design parameters
- U_h^ad: set of admissible discrete design parameters
- function from U_h^ad defining the shape of the body
- h: function from U_h^ad defining the approximated shape of the body
- vector of nodal values of h
- sequence of functions tending to the shape function
- domain defined by the shape function
- K: bulk modulus
- shear modulus
- penalty parameter for the contact condition
- V(): space of virtual displacements
- V_h: finite element approximation of V()
- J: cost functional
- J_h: discretized cost functional
- J: algebraic form of J_h
- σ(u): stress tensor
- e(u): strain tensor
- K: stiffness matrix
- f: force vector
- b(q): term arising from nonlinear boundary conditions
- q: vector of nodal degrees of freedom
- p: vector of adjoint state variables
- J: Jacobian of the isoparametric mapping
- |J|: determinant of J
- N: vector of shape function values on the parent element
- L: matrix of shape function derivatives on the parent element
- G: matrix of Cartesian derivatives of the shape functions
- X: matrix of nodal coordinates of an element
- D: matrix of elastic coefficients
- B: strain–displacement matrix
- part of the boundary where tractions are prescribed
- part of the boundary where displacements are prescribed
- variable part of the boundary
- strain invariant
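Differentiating the discrete equations of equilibrium leads to adjoint-type sensitivities: for K(p)u = f and a cost J = cᵀu, one gets dJ/dp = −λᵀ(dK/dp)u with Kᵀλ = c. The toy 2×2 example below (a hypothetical stiffness matrix, not from the paper) checks this formula against a finite difference.

```python
def solve2(K, b):
    """Solve a 2x2 linear system by Cramer's rule."""
    det = K[0][0] * K[1][1] - K[0][1] * K[1][0]
    return [(b[0] * K[1][1] - b[1] * K[0][1]) / det,
            (K[0][0] * b[1] - K[1][0] * b[0]) / det]

def K_of(p):
    # hypothetical symmetric stiffness matrix depending on a design parameter p
    return [[2.0 + p, -1.0], [-1.0, 2.0]]

def cost(p, f=(1.0, 0.0), c=(1.0, 1.0)):
    u = solve2(K_of(p), list(f))
    return c[0] * u[0] + c[1] * u[1]

def adjoint_grad(p, f=(1.0, 0.0), c=(1.0, 1.0)):
    """dJ/dp = -lambda^T (dK/dp) u, with K^T lambda = c."""
    K = K_of(p)
    u = solve2(K, list(f))
    lam = solve2(K, list(c))          # K is symmetric here, so K^T = K
    dK = [[1.0, 0.0], [0.0, 0.0]]     # dK/dp for the matrix above
    dKu = [dK[0][0] * u[0] + dK[0][1] * u[1],
           dK[1][0] * u[0] + dK[1][1] * u[1]]
    return -(lam[0] * dKu[0] + lam[1] * dKu[1])
```

One extra adjoint solve gives the exact sensitivity for any number of design parameters, which is why this route is preferred over repeated finite differencing of the equilibrium equations.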

8.
We define an identity to be hypersatisfied by a variety V if, whenever the operation symbols of V are replaced by arbitrary terms (of appropriate arity) in the operations of V, the resulting identity is satisfied by V in the usual sense. Whenever an identity is hypersatisfied by a variety V, we say that it is a V-hyperidentity. For example, the identity x + x·y = x·(x + y) is hypersatisfied by the variety L of all lattices. A proof of this consists of a case-by-case examination of the substitutions of {x, y, x ∨ y, x ∧ y}, the set of all binary lattice terms, for {+, ·}. In an earlier work, we exhibited a hyperbase for the set of all binary lattice (or, equivalently, quasilattice) hyperidentities of type ⟨2, 2⟩. In this paper we provide a greatly refined hyperbase. The proof that it is a hyperbase was obtained by using the automated reasoning program Otter 3.0.4.
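The case-by-case examination can be mechanized on a concrete lattice. The sketch below checks all 16 substitutions of binary lattice terms for + and · on the lattice of subsets of a two-element set; this is a necessary condition for the hyperidentity (the full claim ranges over all lattices).

```python
# elements of the lattice of subsets of {0, 1}, ordered by inclusion
elements = [frozenset(), frozenset({0}), frozenset({1}), frozenset({0, 1})]

# the four binary lattice terms: the two projections, join, and meet
terms = {
    "x":    lambda a, b: a,
    "y":    lambda a, b: b,
    "join": lambda a, b: a | b,
    "meet": lambda a, b: a & b,
}

def hypersatisfied():
    """Check x + x*y = x*(x + y) for every substitution of binary lattice
    terms for + and *, on this particular lattice."""
    for s in terms.values():          # interpretation of +
        for t in terms.values():      # interpretation of *
            for x in elements:
                for y in elements:
                    if s(x, t(x, y)) != t(x, s(x, y)):
                        return False
    return True

print(hypersatisfied())  # True
```

All 16 cases go through; for instance, with + as join and · as meet both sides reduce to x by the absorption laws.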

9.
A review of the methods for global optimization reveals that most have been developed for unconstrained problems. They need to be extended to general constrained problems because most engineering applications have constraints. Some of the methods can be extended easily, while others require further work. It is also possible to transform a constrained problem into an unconstrained one by using penalty or augmented Lagrangian methods and to solve the problem that way. Some global optimization methods find all the local minimum points, while others find only a few of them. In any case, all the methods require a very large number of calculations, so the computational effort to obtain a global solution is generally substantial. The methods for global optimization can be divided into two broad categories: deterministic and stochastic. Some deterministic methods are based on assumptions about the cost function that are not easy to check; these are of limited use since they are not applicable to general problems. Other deterministic methods are based on heuristics which may not lead to the true global solution. Several stochastic methods have been developed as variations of pure random search. Some methods are useful only for discrete optimization problems, while others can be used for both discrete and continuous problems. The main characteristics of each method are identified and discussed.
The selection of a method for a particular application depends on several attributes, such as the types of design variables, whether or not all local minima are desired, and the availability of gradients of all the functions.
Notation
- number of equality constraints
- (·)^T: transpose of a vector
- A: a hypercubic cell in clustering methods
- distance between two adjacent mesh points
- probability that a uniform sample of size N contains at least one point in a subset A of S
- A(v, x): aspiration level function
- A: the set of points with cost function values less than f(x_G^*) + ε; same as A_f(ε)
- A_f(ε): the set of points at which the cost function value is within ε of f(x_G^*)
- the set of points x at which [f(x)] is smaller than a given level
- A_N: the set of N random points
- A_q: the set of sample points with cost function value f_q
- the contraction coefficient (between −1 and 0)
- the expansion coefficient (> 1)
- the reflection coefficient (between 0 and 1)
- A_x(ε): the set of points within the distance ε of x_G^*
- D: diagonal form of the Hessian matrix
- det(·): determinant of a matrix
- d_j: a monotonic function of the number of failed local minimizations
- dt: infinitesimal change in time
- dx: infinitesimal change in design
- a small positive constant
- the noise coefficient, a real function of t
- initial value of the noise coefficient
- exp(·): the exponential function
- f^(c): the record; the smallest cost function value over X^(c)
- [f(x)]: functional for calculating the volume fraction of a subset
- second-order approximation to f(x)
- f(x): the cost function
- an estimate of the upper bound of the global minimum
- f_E: the cost function value at x_E
- f_L: the cost function value at x_L
- f_opt: the current best minimum function value
- f_P: the cost function value at x_P
- f_Q: the cost function value at x_Q
- f_q: a function value used to reduce the random sample
- f_R: the cost function value at x_R
- f_S: the cost function value at x_S
- f_TF^min: a common minimum cost function value for several trajectories
- f_TF^opt: the best current minimum value found so far for f_TF^min
- f_W: the cost function value at x_W
- G: minimum number of points in a cell (A) to be considered full
- Γ: the gamma function
- a factor used to scale the global optimum cost in the zooming method
- minimum distance assumed to exist between two local minimum points
- g_i(x): constraints of the optimization problem
- H: the size of the tabu list
- H(x*): the Hessian matrix of the cost function at x*
- h_j: half side length of a hypercube
- h_m: minimum half side length of hypercubes in one row
- I: the identity matrix
- ILIM: a limit on the number of trials before the temperature is reduced
- J: the set of active constraints
- K: estimate of the total number of local minima
- k: iteration counter
- the number of times a clustering algorithm is executed
- L: Lipschitz constant, defined in Section 2
- L: the number of local searches performed
- the corresponding pole strengths
- log(·): the natural logarithm
- LS: local search procedure
- M: number of local minimum points found in L searches
- m: total number of constraints
- m(t): mass of a particle as a function of time
- m(·): the Lebesgue measure of a set
- average cost value for a random sample of points in S
- N: the number of sample points taken from a uniform random distribution
- n: number of design variables
- n(t): nonconservative resistance forces
- n_c: number of cells; S is divided into n_c cells
- NT: number of trajectories
- π (3.1415926...)
- P_i^(j): hypersphere approximating the j-th cluster at stage i
- p(x^(i)): Boltzmann–Gibbs distribution; the probability of finding the system in a particular configuration
- p_g: a parameter corresponding to each reduced sample point, defined in (36)
- Q: an orthogonal matrix used to diagonalize the Hessian matrix
- the relative size of the i-th region of attraction (i = 1, ..., K)
- r_i^(j): radius of the j-th hypersphere at stage i
- R_{x*}: region of attraction of a local minimum x*
- r_j: radius of a hypersphere
- r: a critical distance; determines whether a point is linked to a cluster
- R^n: the set of n-tuples of real numbers
- a hyperrectangle set used to approximate S
- S: the constraint set
- a user-supplied parameter used to determine r
- s: the number of failed local minimizations
- T: the tabu list
- t: time
- T(x): the tunneling function
- T_c(x): the constrained tunneling function
- T_i: the temperature of a system at configuration i
- TLIMIT: a lower limit for the temperature
- TR: a factor between 0 and 1 used to reduce the temperature
- u(x): a unimodal function
- V(x): the set of all feasible moves at the current design
- v(x): an oscillating small perturbation
- V(y^(i)): Voronoi cell of the code point y^(i)
- v^{-1}: an inverse move
- v_k: a move; the change from the previous to the current design
- w(t): an n-dimensional standard Wiener process
- x: design variable vector of dimension n
- x#: a movable pole used in the tunneling method
- x(0): a starting point for a local search procedure
- X^(c): a sequence of feasible points {x^(1), x^(2), ..., x^(c)}
- x(t): design vector as a function of time
- X*: the set of all local minimum points
- x*: a local minimum point of f(x)
- x*^(i): poles used in the tunneling method
- x_G^*: a global minimum point of f(x)
- transformed design space
- velocity vector of the particle as a function of time
- acceleration vector of the particle as a function of time
- x_C: centroid of the simplex excluding x_L
- x_c: a pole point used in the tunneling method
- x_E: an expansion point of x_R along the direction x_C → x_R
- x_L: the best point of a simplex
- x_P: a new trial point
- x_Q: a contraction point
- x_R: a reflection point; the reflection of x_W about x_C
- x_S: the second worst point of a simplex
- x_W: the worst point of a simplex
- the reduced sample point with the smallest function value in a full cell
- Y: the set of code points
- y^(i): a code point; a point that represents all the points of the i-th cell
- z: a random number uniformly distributed in (0, 1)
- Z^(c): the set of points x where [f^(c)] is smaller than f(x)
- [·]^+: max(0, ·)
- |·|: absolute value
- ||·||: the Euclidean norm
- ∇f[x(t)]: the gradient of the cost function
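The simplest stochastic scheme discussed in such reviews, multistart (pure random search followed by local minimization), fits in a few lines; the two-well test function and all parameters below are illustrative.

```python
import random

def f(x):  return x ** 4 - x ** 2 + 0.2 * x
def df(x): return 4 * x ** 3 - 2 * x + 0.2

def multistart(n_starts=20, lr=0.02, steps=2000, seed=1):
    """Multistart global optimization: local gradient descent from random
    starting points, keeping every distinct local minimum found."""
    rng = random.Random(seed)
    minima = []
    for _ in range(n_starts):
        x = rng.uniform(-2.0, 2.0)
        for _ in range(steps):          # plain gradient descent as the LS step
            x -= lr * df(x)
        if all(abs(x - m) > 1e-3 for m in minima):   # cluster duplicates
            minima.append(x)
    return sorted(minima), min(minima, key=f)
```

The two-well function has local minima near x ≈ 0.65 and x ≈ −0.75; multistart finds both and returns the deeper one, illustrating why such methods need many local searches to be reliable.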

10.
Thin-walled, unstiffened and stiffened shell structures made of fibre composite materials are frequently applied in all fields of lightweight construction due to their high stiffness/strength-to-weight ratios. One major design criterion for these structures is their sensitivity to buckling failure when subjected to in-plane compression and shear loads. This paper describes how the structural analysis program BEOS (Buckling of Eccentrically Orthotropic Sandwich shells) is combined with the optimization procedure SAPOP (Structural Analysis Program and Optimization Procedure) to produce a tool for designing optimum CFRP panels against buckling. Experimental investigations are used to justify the described procedures.
Nomenclature
- C, C_b: material stiffness matrices (shell, beam stiffener)
- f: objective function
- F_cr: buckling load
- g: vector of inequality constraints
- K, K_g: stiffness matrix, geometrical stiffness matrix, condensed stiffness matrix
- n_s, n_b: vectors of stress resultants (shell, beam stiffener)
- N_x, N_y, N_xy: membrane forces of the shell
- P_x, P_y, P_xy: membrane forces of the stiffener
- r_x, r_y, r_xy: radii of curvature
- R^n: n-dimensional Euclidean space
- W: strain energy
- u, v, w: deformations in the x, y, z directions
- x, y, z: global coordinate system
- x: vector of design variables
- y, z: right-hand side and left-hand side eigenvectors
- variational symbol
- δ_ik: Kronecker's delta
- e_s, e_b: strain vectors (shell, beam stiffener)
- λ, λ_cr: load factor, buckling load factor
- total energy, external potential energy
A caret (^) above a variable indicates that the variable belongs to the prebuckling state.

11.
To date, the primary focus of most constrained approximate optimization strategies is that application of the method should lead to improved designs. Few researchers have focused on the development of constrained approximate optimization strategies that are assured of converging to a Karush-Kuhn-Tucker (KKT) point for the problem. Recent work by the authors based on a trust-region model management strategy has shown promise in managing the convergence of constrained approximate optimization in application to a suite of single-level optimization test problems. Using a trust-region model management strategy, coupled with an augmented Lagrangian approach for constrained approximate optimization, the authors have shown in application studies that the approximate optimization process converges to a KKT point for the problem. The approximate optimization strategy sequentially builds a cumulative response surface approximation of the augmented Lagrangian, which is then optimized subject to a trust-region constraint. In this research the authors develop a formal proof of convergence for the response surface approximation based optimization algorithm. Previous application studies were conducted on single-level optimization problems for which response surface approximations were developed using conventional statistical response sampling techniques, such as central composite design, to query a high-fidelity model over the design space. In this research the authors extend the scope of the application studies to include the class of multidisciplinary design optimization (MDO) test problems. More importantly, the authors show that response surface approximations constructed from variable-fidelity data generated during concurrent subspace optimization (CSSO) can be effectively managed by the trust-region model management strategy. Results for two multidisciplinary test problems are presented, in which convergence to a KKT point is observed.
The formal proof of convergence and the successful MDO application of the algorithm using variable-fidelity data generated by CSSO are original contributions to the growing body of research in MDO.
Nomenclature
- k: Lagrangian iteration
- s: approximate minimization iteration
- i, j, l: variable indices
- m: number of inequality constraints
- n: number of design variables
- p: number of equality constraints
- f(x): objective function
- g(x): inequality constraint vector
- g_j(x): j-th inequality constraint
- h(x): equality constraint vector
- h_i(x): i-th equality constraint
- c(x): generalized constraint vector
- c_i(x): i-th generalized constraint
- c_1, c_2, c_3, c_4: real constants
- m(x): approximate model
- q(x): approximate model
- q(x): piecewise approximation
- r_p: penalty parameter
- t, t_1, t_2: step size lengths
- x: design vector, dimension n
- x_l: l-th design variable
- x^U: upper bound vector, dimension n
- x_l^U: l-th design upper bound
- x^L: lower bound vector, dimension n
- x_l^L: l-th design lower bound
- B: approximation of the Hessian
- K: constraints residual
- S: design space
- scalars
- convergence tolerances
- trust-region parameters
- Lagrange multiplier vector, dimension m + p
- λ_i: i-th Lagrange multiplier
- trust-region ratio
- (x): alternative form for the inequality constraints
- (x, λ, r_p): augmented Lagrangian function
- approximation of the augmented Lagrangian function
- fidelity control
- ||·||: Euclidean norm
- ⟨·, ·⟩: inner product
- ∇: gradient operator with respect to the design vector x
- P(y(x)): projection operator; projects the vector y onto the set of feasible directions at x
- Δ: trust-region radius
- Δx: step size
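The core of the trust-region model management loop can be sketched in one dimension: minimize a local quadratic surrogate inside the trust radius, then use the ratio of actual to predicted reduction to accept the step and resize the radius. The surrogate below is a finite-difference quadratic standing in for a response surface approximation; the thresholds are typical textbook values, not the paper's.

```python
def trust_region_min(f, x0=3.0, delta=1.0, max_iter=100, eta=0.1, h=1e-5):
    """Minimal trust-region model management sketch."""
    x = x0
    for _ in range(max_iter):
        g = (f(x + h) - f(x - h)) / (2 * h)             # surrogate gradient
        H = (f(x + h) - 2 * f(x) + f(x - h)) / h ** 2   # surrogate curvature
        # minimize m(s) = f(x) + g*s + 0.5*H*s**2 over |s| <= delta
        if H > 0 and abs(g / H) <= delta:
            s = -g / H
        else:
            s = -delta if g > 0 else delta
        pred = -(g * s + 0.5 * H * s * s)               # predicted reduction
        if pred <= 0:
            break
        rho = (f(x) - f(x + s)) / pred                  # actual / predicted
        if rho > eta:
            x += s                                      # accept the step
        if rho > 0.75 and abs(s) > 0.9 * delta:
            delta *= 2.0                                # model trusted: expand
        elif rho < 0.25:
            delta *= 0.5                                # model poor: shrink
    return x
```

The ratio test is what "manages" the approximation: steps are only accepted, and the region only grows, when the surrogate's predicted reduction is actually realized by the true function.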

12.
We present a new definition of optimality intervals for the parametric right-hand side linear programming (parametric RHS LP) problem φ(λ) = min{cᵀx : Ax = b + λb̄, x ≥ 0}. We then show that an optimality interval consists either of a breakpoint or of the open interval between two consecutive breakpoints of the continuous piecewise linear convex function φ(λ). As a consequence, the optimality intervals form a partition of the closed interval {λ : |φ(λ)| < ∞}. Based on these optimality intervals, we also introduce an algorithm for solving the parametric RHS LP problem which requires an LP solver as a subroutine. If a polynomial-time LP solver is used to implement this subroutine, we obtain a substantial improvement on the complexity of those parametric RHS LP instances which exhibit degeneracy. When the number of breakpoints of φ(λ) is polynomial in terms of the size of the parametric problem, we show that the latter can be solved in polynomial time. This research was partially funded by the United States Navy-Office of Naval Research under Contract N00014-87-K-0202. Its financial support is gratefully acknowledged.
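The piecewise-linear, convex behaviour of φ(λ) is easy to see on a tiny instance (our own, solved by brute-force vertex enumeration rather than an LP subroutine):

```python
def phi(lam):
    """phi(lam) = min x1 + 2*x2  s.t.  x1 + x2 = 1 + lam, 0 <= x1 <= 1, x2 >= 0.
    Tiny vertex-enumeration 'LP solver': with one equality constraint and
    box bounds, every vertex fixes x1 at a bound or puts x2 at 0."""
    best = None
    for x1 in (0.0, 1.0, 1.0 + lam):      # x1 at a bound, or x2 = 0
        x2 = 1.0 + lam - x1
        if 0.0 <= x1 <= 1.0 and x2 >= -1e-12:
            val = x1 + 2.0 * x2
            best = val if best is None else min(best, val)
    return best

# phi is piecewise linear and convex in lam, with a breakpoint at lam = 0:
# phi(lam) = 1 + lam for -1 <= lam <= 0, and 1 + 2*lam for lam >= 0.
```

The two optimality intervals (−1, 0) and (0, ∞) plus the breakpoint {0} partition the set of λ with finite φ(λ), mirroring the partition result in the abstract.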

13.
Let f: {0,1}^n → {0,1}^m be an m-output Boolean function in n variables. f is called a k-slice if f(x) equals the all-zero vector for all x with Hamming weight less than k and f(x) equals the all-one vector for all x with Hamming weight more than k. Wegener showed that PI_k-set circuits (set circuits over prime implicants of length k) are at the heart of any optimum Boolean circuit for a k-slice f. We prove that, in PI_k-set circuits, savings are possible for the mass production of any F⊗X, i.e., any collection F of m output-sets given any collection X of n input-sets, if their PI_k-set complexity satisfies SC_m(F⊗X) ≥ 3n + 2m. This PI_k mass production, which can be used in monotone circuits for slice functions, is then exploited in different ways to obtain a monotone circuit of complexity 3n + o(n) for the Nečiporuk slice, thus disproving a conjecture by Wegener that this slice has monotone complexity Ω(n^{3/2}). Finally, the new circuit for the Nečiporuk slice is proven to be asymptotically optimal, not only with respect to monotone complexity, but also with respect to combinational complexity.
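The definition of a k-slice is easy to state in code, and one sees immediately why slice functions matter for monotone complexity: two distinct comparable inputs always differ in Hamming weight, so every slice of an arbitrary f is monotone. A small check (our own illustration):

```python
from itertools import product

def weight(x):
    return sum(x)

def make_slice(f, k):
    """Turn an arbitrary Boolean function f into its k-slice: forced to 0
    below Hamming weight k, forced to 1 above, equal to f on weight k."""
    def g(x):
        w = weight(x)
        if w < k:
            return 0
        if w > k:
            return 1
        return f(x)
    return g

def is_monotone(g, n):
    pts = list(product((0, 1), repeat=n))
    return all(g(x) <= g(y)
               for x in pts for y in pts
               if all(a <= b for a, b in zip(x, y)))

n = 4
parity = lambda x: weight(x) % 2               # parity is non-monotone
print(is_monotone(parity, n))                  # False
print(is_monotone(make_slice(parity, 2), n))   # True
```

Even the 2-slice of parity, a famously non-monotone function, is monotone, which is what makes monotone circuit lower and upper bounds for slices meaningful for general complexity.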

14.
In many application areas, it is important to detect outliers. The traditional engineering approach to outlier detection is that we start with some normal values x_1, ..., x_n, compute the sample average E and the sample standard deviation σ, and then mark a value x as an outlier if x is outside the k_0-sigma interval [E − k_0σ, E + k_0σ] (for some pre-selected parameter k_0). In real life, we often have only interval ranges for the normal values x_1, ..., x_n. In this case, we only have intervals of possible values for the bounds E − k_0σ and E + k_0σ. We can therefore identify outliers as values that are outside all k_0-sigma intervals. Once we identify a value as an outlier for a fixed k_0, it is also desirable to find out to what degree this value is an outlier, i.e., what is the largest value k_0 for which this value is an outlier. In this paper, we analyze the computational complexity of these outlier detection problems, provide efficient algorithms that solve some of these problems (under reasonable conditions), and list related open problems.
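For small n, the bounds needed by the outlier test can be obtained by brute force: the sample mean is linear and the standard deviation is convex in the data vector, so E + k₀σ (convex) is maximized, and E − k₀σ (concave) minimized, at endpoint vertices of the interval box. The sketch below implements this sufficient guaranteed-outlier check; the paper's algorithms are more efficient than this 2ⁿ enumeration.

```python
from itertools import product

def outlier_bounds(intervals, k=2.0):
    """Extremes of E + k*sigma and E - k*sigma over the interval box,
    found by enumerating the 2**n endpoint vertices (sample std, n-1)."""
    n = len(intervals)
    hi, lo = float("-inf"), float("inf")
    for choice in product(*intervals):
        m = sum(choice) / n
        s = (sum((v - m) ** 2 for v in choice) / (n - 1)) ** 0.5
        hi = max(hi, m + k * s)
        lo = min(lo, m - k * s)
    return lo, hi

def is_guaranteed_outlier(x, intervals, k=2.0):
    """True if x lies outside every possible k-sigma interval."""
    lo, hi = outlier_bounds(intervals, k)
    return x < lo or x > hi
```

With six measurements known only to lie in [0.9, 1.1], the value 5.0 is flagged as an outlier for every possible realization of the data, while 1.0 is not.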

15.
Summary Geffert has shown that each recursively enumerable language L over Δ can be expressed in the form L = {h(x)^{-1} g(x) : x ∈ Σ^+} ∩ Δ^*, where Σ is an alphabet and g, h is a pair of morphisms. Our purpose is to give a simple proof of Geffert's result and then sharpen it into the form where both of the morphisms are nonerasing. In our method we modify constructions used in a representation of recursively enumerable languages in terms of equality sets and in a characterization of simple transducers in terms of morphisms. As direct consequences, we obtain the undecidability of the Post correspondence problem and various representations of L. For instance, L = ρ(L_0)*, where L_0 is a minimal linear language and ρ is the Dyck reduction over the letter pairs a, A.

16.
This paper presents a study of two learning criteria and two approaches to using them for training neural network classifiers, specifically Multi-Layer Perceptron (MLP) and Radial Basis Function (RBF) networks. The first approach, which is a traditional one, relies on the use of two popular learning criteria, i.e. learning via minimising a Mean Squared Error (MSE) function or a Cross Entropy (CE) function. It is shown that the two criteria have different characteristics in learning speed and outlier effects, and that this approach does not necessarily result in a minimal classification error. To be suitable for classification tasks, in our second approach an empirical classification criterion is introduced for the testing process while using the MSE or CE function for the training. Experimental results on several benchmarks indicate that the second approach, compared with the first, leads to an improved generalisation performance, and that the use of the CE function, compared with the MSE function, gives a faster training speed and improved or equal generalisation performance.
Nomenclature
- x: random input vector with d real components [x_1 ... x_d]
- t: random target vector with c binary components [t_1 ... t_c]
- y(·): neural network function or output vector
- parameters of a neural model
- learning rate
- momentum
- decay factor
- O: objective function
- E: mean sum-of-squares error function
- L: cross-entropy function
- n: n-th training pattern
- N: number of training patterns
- (·): transfer function of a neural unit
- z_j: output of hidden unit j
- a_j: activation of unit j
- W_ij: weight from hidden unit j to output unit i
- W_jl^0: weight from input unit l to hidden unit j
- centre vector of RBF unit j
- width vector of RBF unit j
- p(·|·): conditional probability function
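The different learning-speed characteristics of the two criteria show up already in the single-unit gradients: with a sigmoid output y = σ(a) and target t, cross entropy gives ∂L/∂a = y − t, while squared error gives (y − t)·y·(1 − y), whose extra factor vanishes when the unit saturates. A minimal numeric illustration (ours, not the paper's experiments):

```python
import math

def sigmoid(a):
    return 1.0 / (1.0 + math.exp(-a))

def grad_ce(a, t):
    """d/da of the cross-entropy loss with a sigmoid output: y - t."""
    return sigmoid(a) - t

def grad_mse(a, t):
    """d/da of the squared error 0.5*(y - t)**2 with a sigmoid output;
    the extra y*(1-y) factor vanishes when the unit saturates."""
    y = sigmoid(a)
    return (y - t) * y * (1.0 - y)

# a badly wrong, saturated prediction: target 1, activation -6 (y ~ 0.0025)
a, t = -6.0, 1.0
print(abs(grad_ce(a, t)))   # ~1.0    -> strong error signal
print(abs(grad_mse(a, t)))  # ~0.0025 -> learning stalls
```

This is one mechanism behind the faster training speed observed for the CE criterion: badly misclassified, saturated units still receive a full-strength gradient.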

17.
In Part I of this study (Rozvany et al. 1989), general aspects of iterative continuum-based optimality criteria (COC) methods were discussed and the proposed approach was applied to structural optimization problems with freely varying cross-sectional dimensions. In this paper, upper and lower limits on the cross-sectional dimensions, segmentation, allowance for the cost of supports and for selfweight, nonlinear and nonseparable cost and stiffness functions, and additional stress constraints are considered. The examples include beams with various geometrical properties and plates of variable thickness in plane stress. All results are compared with independently derived analytical or semi-analytical solutions and/or with solutions obtained by a mathematical programming (sequential quadratic programming, SQP) method. The number of elements in the beam examples is up to one hundred thousand, and in the plane stress problems up to 3200 elements are used. Comparisons between the computer time requirements of the COC and SQP methods are also presented. In addition, the problem of layout optimization is discussed briefly. The paper is intended to establish the power and versatility of the COC method.
Notes. 1. Some less important symbols are defined where they first appear in the text. 2. Nondimensional variables are indicated by the symbol ~.
Notation
- a: half beam span
- b: width of cross-section
- c: specific cost factor
- k_1, k_2: factors in stress constraints
- n: number of iterations
- P, p: load vector, load
- q: generalized strain vector
- r: stiffness factor
- s: flexural stiffness
- t: reaction cost factor
- u: real and adjoint displacement vectors
- x: coordinate (beam)
- x, y: coordinates (plate)
- x: spatial coordinates
- z: variable cross-sectional dimensions
- z_a, z_b: prescribed minimum and maximum values of a cross-sectional dimension
- A: active element set
- A_i: area of element i
- D: structural domain
- D: segment
- E: Young's modulus
- G: modulus of rigidity
- I: moment of inertia
- L: span
- [F]: flexibility matrix
- M: real and adjoint bending moments
- N_x, N_y, N_xy: stress resultants in plane stress
- P: passive element set
- R: reaction vector
- R_d: region controlled by a deflection constraint
- region controlled by a prescribed lowest value of a variable dimension
- R_u: region controlled by a prescribed highest value of a variable dimension
- R^+, R^-: regions controlled by a flexural stress constraint (with positive or negative moment)
- regions controlled by a shear stress constraint (with positive or negative shear force)
- S(z, Q): stress constraints
- V: real and adjoint shear forces
- W_i: mutual work of element i
- specific weight
- element length
- real and adjoint beam curvatures
- Lagrangian (variable)
- ν: Lagrangian (constant)
- slack function
- slack variable
- specific cost
- real and adjoint generalized shear strains
- ν: Poisson's ratio
- prescribed deflection value
- total cost (total weight, objective function)
- reaction cost
- G-gradient

18.
In 1958 J. Lambek introduced a calculus L of syntactic types and defined an equivalence relation on types: x ≈ y means that there exists a sequence x = x_1, ..., x_n = y (n ≥ 1) such that x_i → x_{i+1} or x_{i+1} → x_i (1 ≤ i < n). He pointed out that x ≈ y if and only if there is a join z such that x → z and y → z. This paper gives an effective characterization of this equivalence for the Lambek calculi L and LP, and for the multiplicative fragments of Girard's and Yetter's linear logics. Moreover, for the non-directed Lambek calculus LP and the multiplicative fragment of Girard's linear logic, we present linear-time algorithms for deciding whether two types are equal, and for finding a join for them if they are. The author was sponsored by project NF 102/62-356 (Structural and Semantic Parallels in Natural Languages and Programming Languages), funded by the Netherlands Organization for the Advancement of Research (N.W.O.).

19.
Viscous Lattices     
Let E be an arbitrary space, and δ an extensive dilation of P(E) into itself, with an adjoint erosion ε. Then the image δ[P(E)] of P(E) under δ is a complete lattice L in which the sup is the union and the inf is the opening, according to δ, of the intersection. The lattice L, named viscous, is neither distributive nor complemented. Any dilation on P(E) admits the same expression in L; however, the erosion in L is the opening, according to δ, of the erosion in P(E). Given a connection C on P(E), the image of C under δ turns out to be a connection on L as soon as δ(C) ⊆ C, and the elementary connected openings of the two connections are directly linked. A comprehensive class of connection-preserving closings is constructed. Two examples, one binary and one numerical (the latter coming from heart imaging), prove the relevance of viscous lattices in interpolation and in segmentation problems.
Jean Serra obtained the degree of Mining Engineer in 1962 in Nancy, France, and in 1967 his Ph.D. for work dealing with the estimation of the iron ore body of Lorraine by geostatistics. In cooperation with Georges Matheron, he laid the foundations of a new method that he called Mathematical Morphology (1964). Its purpose was to describe quantitatively the shapes and textures of natural phenomena, at micro and macro scales. In 1967, he founded with G. Matheron the Centre de Morphologie Mathématique at the School of Mines of Paris, on the campus of Fontainebleau. Since that time, he has been working in this framework as a Directeur de Recherches. His main book is the two-volume treatise Image Analysis and Mathematical Morphology (Academic Press, 1982, 1988). He was Vice President for Europe of the International Society for Stereology from 1979 to 1983. He founded the International Society for Mathematical Morphology in 1993 and was elected its first president. His achievements include several patents for image processing devices and various awards and titles, such as the first AFCET award in 1988 and Doctor Honoris Causa of the University of Barcelona (Spain) in 1993. He recently developed a new theory of segmentation based on set connections (2001–2004), and currently works on colour image processing.
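The adjunction (ε, δ) and the opening δε at the heart of this construction can be demonstrated on subsets of the integers (a plain morphological opening, not the viscous lattice itself; the sets and structuring element are illustrative):

```python
def dilate(X, B):
    """Minkowski dilation of a set X of integers by structuring element B."""
    return frozenset(x + b for x in X for b in B)

def erode(X, B):
    """Adjoint erosion: points whose whole B-translate lies inside X
    (a finite window of Z serves as the universe here)."""
    return frozenset(x for x in range(-50, 51) if all(x + b in X for b in B))

B = frozenset({-1, 0, 1})
X = frozenset({0, 1, 2, 3, 10, 20, 21, 22})

opening = dilate(erode(X, B), B)
# the opening removes components too small to contain a translate of B
print(sorted(opening))  # [0, 1, 2, 3, 20, 21, 22]
```

The opening is anti-extensive (contained in X) and idempotent, the two properties that make δε an "inf"-forming operation in the viscous lattice.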

20.
Structural optimization methods are widely used for determining optimum designs of composite structures. The high complexity of the problem and the large number of design variables, such as the lamina thicknesses and ply orientation angles of a multilayer composite, make it difficult to examine large structures with adequate computational effort. When the finite element method is used for structural and sensitivity analysis, a large number of finite elements is necessary to achieve a certain accuracy of the analysis results, which also increases computational costs.
Adaptive meshing methods start with a coarse mesh and refine it locally, during an iterative process, where high discretization error occurs in the structure. Adaptive meshing can be performed within the optimization process to reduce the number of elements during the first iterations of the optimization loop.
This paper presents the adaptive refinement of multilayer composite finite element models. The development of a simple error indicator, which takes into account the oriented and multilayer properties of composites, is considered. An optimization system is presented that handles large composite structures using adaptive finite element meshing, with the aim of minimizing the computational effort at a high accuracy of the analysis and optimization results.
An example from engineering practice is given to show the performance of the system.
Notation
- A: finite element surface
- t: element thickness
- t_i: thickness of ply i
- e: strain error vector
- D_i: material constitutive matrix of ply i
- D_i^*: material constitutive matrix of ply i (rotated about the fiber orientation angle)
- V: element volume
- strain vector (finite element solution)
- strain vector (averaged finite element solution)
- domain error estimator
- h_i: edge length of element i
- r: residuum of element equilibrium
- J: interelement traction jumps
- element boundary
- CFRP: carbon fibre reinforced plastic
- Lagrangian function
- Lagrangian multiplier
- W: structural weight
- g: constraint function
- X: design variable
- X_0: design variable at the current design point
- u: displacement vector
- N_i: shape function of node i
- rotational degrees of freedom
- K_E: element stiffness matrix
- B: strain–displacement relation matrix
- local curvilinear element coordinates
- percentage relative error bound
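A one-dimensional caricature of such an error indicator: split the elements adjacent to the largest jump in discrete slope, which concentrates nodes where the interpolated solution varies sharply. The indicator, test function, and parameters are ours and far simpler than the paper's composite-aware indicator.

```python
import math

def interp_error(nodes, f, samples=2000):
    """Max error of piecewise-linear interpolation of f on the given nodes."""
    err, j = 0.0, 0
    a, b = nodes[0], nodes[-1]
    for i in range(samples + 1):
        x = a + (b - a) * i / samples
        while x > nodes[j + 1]:
            j += 1
        x0, x1 = nodes[j], nodes[j + 1]
        fx = f(x0) + (f(x1) - f(x0)) * (x - x0) / (x1 - x0)
        err = max(err, abs(fx - f(x)))
    return err

def refine_adaptively(f, nodes, n_rounds=6):
    """Split the two elements sharing the node with the largest jump in
    discrete slope (a crude stand-in for a residual/jump error indicator)."""
    nodes = list(nodes)
    for _ in range(n_rounds):
        slopes = [(f(nodes[i + 1]) - f(nodes[i])) / (nodes[i + 1] - nodes[i])
                  for i in range(len(nodes) - 1)]
        jumps = [abs(slopes[i + 1] - slopes[i]) for i in range(len(slopes) - 1)]
        k = max(range(len(jumps)), key=jumps.__getitem__)
        mid1 = 0.5 * (nodes[k] + nodes[k + 1])
        mid2 = 0.5 * (nodes[k + 1] + nodes[k + 2])
        nodes = sorted(set(nodes + [mid1, mid2]))
    return nodes

f = lambda x: math.atan(20 * (x - 0.3))     # sharp layer near x = 0.3
coarse = [i / 8 for i in range(9)]
fine = refine_adaptively(f, coarse)
```

Starting from a uniform 8-element mesh, the refinement clusters new nodes around the layer at x = 0.3 and the maximum interpolation error drops markedly, while smooth regions keep their coarse elements.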

