Similar Documents
 20 similar documents found, search time 140 ms
1.
The training time of an ANN depends on the size of the ANN (i.e. the number of hidden layers and the number of neurons in each layer), the size of the training data, its normalization range, the type of mapping of the training patterns (like X–Y), the error function and the learning algorithm. Efforts have been made in the past to reduce the training time of ANNs by selection of an optimal network and modification of the learning algorithm. In this paper, an attempt has been made to develop a new neuron model using a neuro-fuzzy approach to overcome the problems of ANNs by incorporating the features of fuzzy systems at the neuron level. This synergetic approach is achieved by fuzzifying the neuron structure, which incorporates the features of a simple neuron as well as a higher-order neuron.
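As an illustration only (not the authors' formulation), the sketch below shows a neuron that fuzzifies its inputs with Gaussian membership functions and combines a first-order weighted sum (the simple-neuron part) with a product of memberships (a higher-order term); all function names and parameters are hypothetical.

```python
import numpy as np

def gaussian_membership(x, center=0.5, width=0.25):
    """Fuzzify a crisp input into a membership degree in [0, 1]."""
    return np.exp(-((x - center) ** 2) / (2.0 * width ** 2))

def fuzzy_neuron(x, w_linear, w_cross, bias=0.0):
    """Illustrative neuro-fuzzy neuron: first-order weighted sum of fuzzified
    inputs plus a higher-order (product) term, passed through a sigmoid."""
    mu = gaussian_membership(np.asarray(x, dtype=float))   # fuzzified inputs
    first_order = np.dot(w_linear, mu)                      # simple-neuron part
    higher_order = w_cross * np.prod(mu)                    # higher-order part (product t-norm)
    net = first_order + higher_order + bias
    return 1.0 / (1.0 + np.exp(-net))                       # sigmoid activation

# Toy usage: three inputs in [0, 1]
x = [0.2, 0.6, 0.9]
print(fuzzy_neuron(x, w_linear=[0.8, -0.3, 0.5], w_cross=1.2))
```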

2.
This paper considers two classes of infinite-dimensional systems described by an abstract differential equation ẋ(t) = (A + BΔC)x(t), x(0) = x0, on a Hilbert space, where A, B, C are linear, possibly unbounded operators and Δ is an unknown, linear, bounded perturbation. The two classes of systems are defined in terms of properties imposed on the triple (A, B, C). It is proved that for every Δ the perturbed system (A + EΔF, B, C) inherits all the properties of the unperturbed system (A, B, C) if (A, E, F) and (A, B, F) are in the same class.

3.
Error measures can be used to numerically assess the differences between two images. Much work has been done on binary error measures, but little on objective metrics for grey-scale images. In our discussion here we introduce a new grey-scale measure, g, aiming to improve upon the most common grey-scale error measure, the root-mean-square error. Our new measure is an extension of the authors' recently developed binary error measure, b, not only in structure, but also having both a theoretical and intuitive basis. We consider the similarities between b and g when tested in practice on binary images, and present results comparing g to the root-mean-square error and the Sobolev norm for various binary and grey-scale images. There are no previous examples where the last of these measures, the Sobolev norm, has been implemented for this purpose.
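For reference, the baseline measure being improved upon, the root-mean-square error between two equally sized grey-scale images, can be computed as in the minimal NumPy sketch below (image data here is a placeholder):

```python
import numpy as np

def rmse(image_a, image_b):
    """Root-mean-square error between two equally sized grey-scale images."""
    a = np.asarray(image_a, dtype=float)
    b = np.asarray(image_b, dtype=float)
    if a.shape != b.shape:
        raise ValueError("images must have the same shape")
    return np.sqrt(np.mean((a - b) ** 2))

# Toy usage with two 4x4 grey-scale patches
reference = np.full((4, 4), 128.0)
degraded = reference + np.random.default_rng(0).normal(0.0, 5.0, size=(4, 4))
print(rmse(reference, degraded))
```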

4.
Dr. T. Ström 《Computing》1972,10(1-2):1-7
It is a commonly occurring problem to find good norms ‖·‖ or logarithmic norms μ(·) for a given matrix A, in the sense that they should be close to the spectral radius ρ(A) and the spectral abscissa α(A), respectively. Examples may be the certification that A is convergent, i.e. ρ(A) ≤ ‖A‖ < 1, or stable, i.e. α(A) ≤ μ(A) < 0. Often the ordinary norms do not suffice and one would like to try simple modifications of them, such as using an ordinary norm for a diagonally transformed matrix. This paper treats this problem for some of the ordinary norms.
Minimization of norms and logarithmic norms by diagonal transformations
Summary: A frequently occurring practical problem is the construction of good norms ‖·‖ and logarithmic norms μ(·) for a given matrix A. "Good" here means that ‖A‖ approximates the spectral radius ρ(A) = max |λi| and μ(A) approximates the spectral abscissa α(A) = max Re λi. Examples arise for convergent matrices, where ρ(A) ≤ ‖A‖ < 1 is desired, and for stable matrices, where α(A) ≤ μ(A) < 0 is to be shown. We investigate here how far one can get with diagonal transformations and the most common norms.
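To make the quantities concrete, the sketch below (our illustration, not the paper's algorithm) computes ρ(A), α(A), the ∞-norm and the corresponding logarithmic norm μ∞(A), and shows how a diagonal similarity D⁻¹AD can tighten both bounds; the example matrix and scaling are arbitrary.

```python
import numpy as np

def spectral_radius(A):
    return max(abs(np.linalg.eigvals(A)))

def spectral_abscissa(A):
    return max(np.linalg.eigvals(A).real)

def inf_norm(A):
    return np.linalg.norm(A, np.inf)

def log_norm_inf(A):
    """Logarithmic norm for the infinity norm:
    mu_inf(A) = max_i ( Re a_ii + sum_{j != i} |a_ij| )."""
    A = np.asarray(A)
    diag = np.real(np.diag(A))
    off = np.sum(np.abs(A), axis=1) - np.abs(np.diag(A))
    return float(np.max(diag + off))

A = np.array([[-1.0, 100.0],
              [0.001, -2.0]])
D = np.diag([200.0, 1.0])                 # arbitrary diagonal scaling
A_scaled = np.linalg.inv(D) @ A @ D       # similarity: same eigenvalues as A

print("rho(A)    =", spectral_radius(A))
print("alpha(A)  =", spectral_abscissa(A))
print("||A||_inf =", inf_norm(A), "-> scaled:", inf_norm(A_scaled))
# mu_inf drops from 99 to -0.5, so the scaled log norm certifies stability
print("mu_inf(A) =", log_norm_inf(A), "-> scaled:", log_norm_inf(A_scaled))
```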

5.
Logical Comparison of Inconsistent Perspectives using Scoring Functions
The language for describing inconsistency is underdeveloped. If a database (a set of formulae) is inconsistent, there is usually no qualification of that inconsistency. Yet, it would seem useful to be able to say how inconsistent a database is, or to say whether one database is more inconsistent than another database. In this paper, we provide a more general characterization of inconsistency in terms of a scoring function for each database Δ. A scoring function S is from the power set of Δ into the natural numbers, defined so that S(Γ) gives the number of minimally inconsistent subsets of Δ that would be eliminated if the subset Γ was removed from Δ. This characterization offers an expressive and succinct means for articulating, in general terms, the nature of inconsistency in a set of formulae. We then compare databases using their scoring functions. This gives an intuitive ordering relation over databases that we can describe as "more inconsistent than". These techniques are potentially useful in a wide range of problems including monitoring progress in negotiations between a number of participants, and in comparing heterogeneous sources of information.
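A brute-force reading of this definition on a toy propositional database (the formula representation and names below are our own, not the paper's) might look like:

```python
from itertools import combinations, product

ATOMS = ["p", "q"]

# Toy database Delta: formulae as Python predicates over a truth assignment.
DELTA = {
    "p":      lambda v: v["p"],
    "not p":  lambda v: not v["p"],
    "q":      lambda v: v["q"],
    "not q":  lambda v: not v["q"],
}

def consistent(names):
    """A set of formulae is consistent iff some assignment satisfies them all."""
    for values in product([False, True], repeat=len(ATOMS)):
        v = dict(zip(ATOMS, values))
        if all(DELTA[n](v) for n in names):
            return True
    return False

def minimal_inconsistent_subsets(delta):
    """Inconsistent subsets all of whose proper subsets are consistent."""
    mis = []
    for r in range(1, len(delta) + 1):
        for subset in combinations(sorted(delta), r):
            s = set(subset)
            if not consistent(s) and all(consistent(s - {x}) for x in s):
                mis.append(frozenset(s))
    return mis

MIS = minimal_inconsistent_subsets(DELTA)

def score(gamma):
    """S(Gamma): number of minimally inconsistent subsets of Delta that would
    be eliminated if Gamma were removed from Delta."""
    return sum(1 for m in MIS if m & frozenset(gamma))

print(MIS)               # two MI subsets: {p, not p} and {q, not q}
print(score({"p"}))      # 1: removing "p" eliminates one MI subset
print(score({"p", "q"})) # 2: removing both eliminates both MI subsets
```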

6.
Semantics connected to some information-based metaphor are well known in the logic literature: a paradigmatic example is Kripke semantics for Intuitionistic Logic. In this paper we start from the concrete problem of providing suitable logic-algebraic models for the calculus of attribute dependencies in Formal Contexts with information gaps, and we obtain an intuitive model based on the notion of passage of information, showing that Kleene algebras, semi-simple Nelson algebras, three-valued Łukasiewicz algebras and Post algebras of order three are, in a sense, naturally and directly connected to partially defined information systems. In this way we can provide for these logic-algebraic structures a raison d'être different from the original motivations concerning, for instance, computability theory.

7.
Based on the Σ- and Δ-classes of the polynomial-time hierarchy, Schöning [S1], [S3] introduced low and high hierarchies within NP. Several classes of sets have been located in the bottom few levels of these hierarchies [S1], [S3], [KS], [BB], [BS2], [AH]. Most results placing sets in the Δ-levels of the low hierarchy are related to sparse sets, and the proof techniques employed involve deterministic enumeration of sparse sets. Balcázar et al. [BBS] and Allender and Hemachandra [AH] introduced extended low hierarchies, involving sets outside of NP, based on the Σ- and Δ-classes of the polynomial-time hierarchy. Several classes of sets have been located in the Δ-levels of these hierarchies as well, and once again most such results involve sparse sets. In this paper we introduce a refinement of the low and high hierarchies and of the extended low hierarchies. Our refinement is based on the Θ-classes of the polynomial-time hierarchy. We show that almost all of the classes of sets that are known to belong to the Δ-levels of the low and extended low hierarchies actually belong to the newly defined Θ-levels of these hierarchies. Our proofs use Kadin's [K1] technique of computing the census of a sparse set first, followed by a nondeterministic enumeration of the set. This leads to the sharper lowness results. We also consider the optimality of these new lowness results. For sets in the Θ-levels of the low hierarchy we have oracle results indicating that substantially stronger results are not possible through use of Kadin's technique. For sets in the Θ-classes of the extended low hierarchy we have tight absolute lower bounds; that is, lower bounds without oracles. These bounds are slightly stronger than similar bounds appearing in [AH]. This work was supported in part by NSF Grant CCR-8909071. The second author's current address is Networking Software Division, IBM Research, Triangle Park, NC 27709, USA.

8.
Exact algorithms for detecting all rotational and involutional symmetries in point sets, polygons and polyhedra are described. The time complexities of the algorithms are shown to be Θ(n) for polygons and Θ(n log n) for two- and three-dimensional point sets. Θ(n log n) time is also required for general polyhedra, but for polyhedra with connected, planar surface graphs Θ(n) time can be achieved. All algorithms are optimal in time complexity, within constants.
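For polygons, the linear-time idea is usually realized by encoding the polygon as a cyclic sequence of (edge length, turn angle) pairs and finding the cyclic shifts under which the sequence maps to itself. The sketch below is our own illustration of that idea and uses a simple quadratic scan in place of a guaranteed linear-time string matcher.

```python
import math

def edge_angle_signature(points):
    """Cyclic signature of a polygon: one (edge length, turn angle) pair per vertex."""
    n = len(points)
    sig = []
    for i in range(n):
        ax, ay = points[i]
        bx, by = points[(i + 1) % n]
        cx, cy = points[(i + 2) % n]
        length = math.hypot(bx - ax, by - ay)
        turn = math.atan2(cy - by, cx - bx) - math.atan2(by - ay, bx - ax)
        turn = (turn + math.pi) % (2.0 * math.pi) - math.pi   # normalize to (-pi, pi]
        sig.append((round(length, 9), round(turn, 9)))        # rounded for float comparison
    return sig

def rotational_symmetries(points):
    """Shifts k for which relabelling vertex i -> i+k maps the polygon onto itself
    (k = 0 is the identity)."""
    sig = edge_angle_signature(points)
    n = len(sig)
    doubled = sig + sig
    return [k for k in range(n) if doubled[k:k + n] == sig]

# A square has four rotational symmetries (shifts 0, 1, 2, 3)
square = [(0, 0), (1, 0), (1, 1), (0, 1)]
print(rotational_symmetries(square))
```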

9.
The temporal property to-always has been proposed for specifying progress properties of concurrent programs. Although the to-always properties are a subset of the leads-to properties for a given program, to-always has more convenient proof rules and in some cases more accurately describes the desired system behavior. In this paper, we give a predicate transformer wta, derive some of its properties, and use it to define to-always. Proof rules for to-always are derived from the properties of wta. We conclude by briefly describing two application areas, nondeterministic data flow networks and self-stabilizing systems, where to-always properties are useful.

10.
Our starting point is a definition of the conditional event E|H which differs from many seemingly similar ones adopted in the relevant literature since 1935, starting with de Finetti. In fact, if we do not assign the same third value u (undetermined) to all conditional events, but make it depend on E|H, it turns out that this function t(E|H) can be taken as a general conditional uncertainty measure, and we get (through a suitable – in a sense, compulsory – choice of the relevant operations among conditional events) the natural axioms for many different (besides probability) conditional measures.
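For orientation, the classical de Finetti-style conditional event is three-valued: true when both E and H occur, false when H occurs without E, and undetermined when H fails; the paper's point is to let that third value vary with E|H rather than being one fixed u. A toy sketch of the classical case, in a representation of our own choosing, is:

```python
def conditional_event(E, H, outcome):
    """Three-valued indicator of the conditional event E|H at a given outcome.

    Events are modelled as sets of outcomes. Returns True/False when the
    conditioning event H occurs, and the string "u" (undetermined) otherwise.
    The paper generalizes this by replacing the constant u with a value
    t(E|H) that depends on the conditional event itself.
    """
    if outcome not in H:
        return "u"
    return outcome in E

# Sample space: a die roll; E = "even", H = "at most 4"
omega = {1, 2, 3, 4, 5, 6}
E = {2, 4, 6}
H = {1, 2, 3, 4}
print([conditional_event(E, H, w) for w in sorted(omega)])
# [False, True, False, True, 'u', 'u']
```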

11.
The Gelfond-Lifschitz operator associated with a logic program (and likewise the operator associated with default theories by Reiter) exhibits oscillating behavior. In the case of logic programs, there is always at least one finite, nonempty collection of Herbrand interpretations around which the Gelfond-Lifschitz operator bounces around. The same phenomenon occurs with default logic when Reiter's operator is considered. Based on this, a stable class semantics and extension class semantics have been proposed. The main advantage of this semantics was that it was defined for all logic programs (and default theories), and that this definition was modelled using the standard operators existing in the literature, such as Reiter's operator. In this paper our primary aim is to prove that there is a very interesting duality between stable class theory and the well-founded semantics for logic programming. In the stable class semantics, classes that were minimal with respect to Smyth's power-domain ordering were selected. We show that the well-founded semantics precisely corresponds to a class that is minimal w.r.t. Hoare's power domain ordering: the well-known dual of Smyth's ordering. Besides this elegant duality, this immediately suggests how to define a well-founded semantics for default logic in such a way that the dualities that hold for logic programming continue to hold for default theories. We show how the same technique may be applied to strong autoepistemic logic: the logic of strong expansions proposed by Marek and Truszczynski.

12.
Summary. Geffert has shown that each recursively enumerable language L over Σ can be expressed in the form L = {h(x)⁻¹g(x) | x in Δ⁺} ∩ Σ*, where Δ is an alphabet and g, h is a pair of morphisms. Our purpose is to give a simple proof for Geffert's result and then sharpen it into the form where both of the morphisms are nonerasing. In our method we modify constructions used in a representation of recursively enumerable languages in terms of equality sets and in a characterization of simple transducers in terms of morphisms. As direct consequences, we get the undecidability of the Post correspondence problem and various representations of L. For instance, L = ρ(L0) ∩ Σ*, where L0 is a minimal linear language and ρ is the Dyck reduction with respect to a, A.

13.
We consider queueing networks for which the performance measure J(θ) depends on a parameter θ, which can be a service time parameter or a buffer size, and we are interested in sensitivity analysis of J(θ) with respect to θ. We introduce a new method, called customer-oriented finite perturbation analysis (CFPA), which predicts J(θ + Δθ) for an arbitrary, finite perturbation Δθ from a simulation experiment at θ. CFPA can estimate the entire performance function (by using a finite number of chosen points and fitting a least-squares approximating polynomial to the observations) within one simulation experiment. We obtain CFPA by reformulating finite perturbation analysis (FPA) for customers. The main difference between FPA and CFPA is that the former calculates the sensitivities of timing epochs of events, such as external arrivals or service time completions, while the latter yields sensitivities of departure epochs of customers. We give sufficient conditions for unbiasedness of CFPA. Numerical examples show the efficiency of the method. In particular, we address sensitivity analysis with respect to buffer sizes and thereby give a solution to the problem for which perturbation analysis was originally built.

14.
On improving the accuracy of the Hough transform
The subject of this paper is very high precision parameter estimation using the Hough transform. We identify various problems that adversely affect the accuracy of the Hough transform and propose a new, high accuracy method that consists of smoothing the Hough array H(ρ, θ) prior to finding its peak location and interpolating about this peak to find a final sub-bucket peak. We also investigate the effect of the quantizations Δρ and Δθ of H(ρ, θ) on the final accuracy. We consider in detail the case of finding the parameters of a straight line. Using extensive simulation and a number of experiments on calibrated targets, we compare the accuracy of the method with results from the standard Hough transform method of taking the quantized peak coordinates, with results from taking the centroid about the peak, and with results from least squares fitting. The largest set of simulations cover a range of line lengths and Gaussian zero-mean noise distributions. This noise model is ideally suited to the least squares method, and yet the results from the method compare favorably. Compared to the centroid or to standard Hough estimates, the results are significantly better: for the standard Hough estimates, by a factor of 3 to 10. In addition, the simulations show that as Δρ and Δθ are increased (i.e., made coarser), the sub-bucket interpolation maintains a high level of accuracy. Experiments using real images are also described, and in these the new method has errors smaller by a factor of 3 or more compared to the standard Hough estimates.
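A minimal sketch of the two steps named here, smoothing the accumulator and interpolating about its peak to sub-bucket precision; the array contents and smoothing width are placeholder choices, not the paper's settings.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def subpixel_peak(H, sigma=1.0):
    """Smooth a Hough accumulator H(rho, theta) and return a sub-bucket peak
    location via separable parabolic interpolation about the maximum bin."""
    S = gaussian_filter(np.asarray(H, dtype=float), sigma)
    i, j = np.unravel_index(np.argmax(S), S.shape)
    peak = [float(i), float(j)]
    for axis, idx in ((0, i), (1, j)):
        if 0 < idx < S.shape[axis] - 1:
            f_m = S[idx - 1, j] if axis == 0 else S[i, idx - 1]
            f_0 = S[i, j]
            f_p = S[idx + 1, j] if axis == 0 else S[i, idx + 1]
            denom = f_m - 2.0 * f_0 + f_p
            if denom != 0.0:
                # vertex of the parabola through the three neighbouring bins
                peak[axis] = idx + 0.5 * (f_m - f_p) / denom
    return tuple(peak)  # fractional (rho-index, theta-index)

# Toy accumulator with a blurred peak between bins
H = np.zeros((50, 60))
H[24:27, 30:33] = [[1, 2, 1], [2, 5, 3], [1, 3, 2]]
print(subpixel_peak(H))
```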

15.
The calculation of slope (downhill gradient) for a point in a digital elevation model (DEM) is a common procedure in the hydrological, environmental and remote sensing sciences. The most commonly used slope calculation algorithms employed on DEM topography data make use of a three-by-three search window or kernel centered on the grid point (grid cell) in question in order to calculate the slope at that point. A comparison of eight such slope calculation algorithms has been carried out using an artificial DEM, consisting of a smooth synthetic test surface with various amounts of added Gaussian noise. Morrison's Surface III, a trigonometrically defined surface, was used as the synthetic test surface. Residual slope grids were calculated by subtracting the slope grids produced by the algorithms on test from true/reference slope grids derived by analytic partial differentiation of the synthetic surface. The resulting residual slope grids were used to calculate root-mean-square (RMS) residual error estimates that were used to rank the slope algorithms from best (lowest value of RMS residual error) to worst (largest value of RMS residual error). Fleming and Hoffer's method gave the best results for slope estimation when values of added Gaussian noise were very small compared to the mean smooth elevation difference (MSED) measured within three-by-three elevation point windows on the synthetic surface. Horn's method (used in ArcInfo GRID) performed better than Fleming and Hoffer's as a slope estimator when the noise amplitude was very much larger than the MSED. For the large noise amplitude situation the best overall performing method was that of Sharpnack and Akin. The popular Maximum Downward Gradient method (MDG) performed poorly, coming close to last in the rankings for both situations, as did the Simple method. A nomogram was produced in terms of the standard deviation of the Gaussian noise and MSED values that gave the locus of the trade-off point between Fleming and Hoffer's and Horn's methods.
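As an illustration of the three-by-three kernel approach, here is a sketch of Horn's slope estimator (our own implementation of the standard formula; the window labels a–i run row by row, and the cell size is a placeholder):

```python
import numpy as np

def horn_slope_deg(window, cellsize=30.0):
    """Slope (degrees) at the centre of a 3x3 elevation window using Horn's
    third-order finite difference:
        dz/dx = ((c + 2f + i) - (a + 2d + g)) / (8 * cellsize)
        dz/dy = ((g + 2h + i) - (a + 2b + c)) / (8 * cellsize)
    """
    a, b, c, d, e, f, g, h, i = np.asarray(window, dtype=float).ravel()
    dzdx = ((c + 2 * f + i) - (a + 2 * d + g)) / (8.0 * cellsize)
    dzdy = ((g + 2 * h + i) - (a + 2 * b + c)) / (8.0 * cellsize)
    return np.degrees(np.arctan(np.hypot(dzdx, dzdy)))

# A plane rising 1 m per 30 m cell toward the east: slope = atan(1/30) = 1.91 deg
window = [[0, 1, 2],
          [0, 1, 2],
          [0, 1, 2]]
print(horn_slope_deg(window, cellsize=30.0))
```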

16.
Learning to Play Chess Using Temporal Differences
Baxter, Jonathan; Tridgell, Andrew; Weaver, Lex 《Machine Learning》2000,40(3):243-263
In this paper we present TDLEAF(λ), a variation on the TD(λ) algorithm that enables it to be used in conjunction with game-tree search. We present some experiments in which our chess program KnightCap used TDLEAF(λ) to learn its evaluation function while playing on Internet chess servers. The main success we report is that KnightCap improved from a 1650 rating to a 2150 rating in just 308 games and 3 days of play. As a reference, a rating of 1650 corresponds to about level B human play (on a scale from E (1000) to A (1800)), while 2150 is human master level. We discuss some of the reasons for this success, principal among them being the use of on-line, rather than self-play. We also investigate whether TDLEAF(λ) can yield better results in the domain of backgammon, where TD(λ) has previously yielded striking success.
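A compact sketch of the TDLEAF(λ) weight update for a differentiable (here linear) evaluation function; the feature extractor, step size and game data are placeholders, and the distinguishing point relative to plain TD(λ) is that gradients are taken at the principal-variation leaf found by search from each position.

```python
import numpy as np

def tdleaf_update(leaf_features, weights, alpha=0.01, lam=0.7):
    """One TDLeaf(lambda) update for a linear evaluation J(s, w) = w . phi(s).

    leaf_features[t] is the feature vector phi of the principal-variation leaf
    reached by game-tree search from the t-th position of a game; for a linear
    evaluation the gradient of J is just that feature vector.
    """
    phi = np.asarray(leaf_features, dtype=float)
    w = np.asarray(weights, dtype=float).copy()
    values = phi @ w                              # J(s_t^leaf, w) for each position t
    d = np.diff(values)                           # temporal differences d_t
    n = len(d)
    for t in range(n):
        discounted = sum(lam ** (j - t) * d[j] for j in range(t, n))
        w += alpha * phi[t] * discounted          # gradient of linear J is phi[t]
    return w

# Toy game: 4 positions, 3 features each, placeholder data
rng = np.random.default_rng(0)
phi = rng.normal(size=(4, 3))
print(tdleaf_update(phi, weights=np.array([0.1, -0.2, 0.05])))
```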

17.
This paper analyses the design sensitivity of a suspension system with material and geometric nonlinearities for a motorcycle structure. The main procedures include nonlinear structural analysis, formulation of the problem with nonlinear dynamic response, design sensitivity analysis, and optimization. The incremental finite element method is used in structural analysis. The stiffness and damping parameters of the suspension system are considered as design variables. The maximum amplitude of nonlinear transient response at the seat is taken as the objective function during the optimization simulation. A more realistic finite element model for the motorcycle structure with elasto-damping elements of different material models is presented. A comparison is made of the optimum designs with and without geometric nonlinear response and is discussed. Nomenclature: A amplitude of the excitation function - a0, a1 time integration constants for the Newmark method - t+Δt Cs secant viscous damping matrix at time t+Δt - t CT tangent viscous damping matrix at time t - C linear part of t CT - Di0 initial value of the i-th design variable - Di instantaneous value of the i-th design variable - t+Δt F(i–1) total internal force vector at the end of iteration (i–1) and time t+Δt - t+Δt F(NL)(i–1) nonlinear part of t+Δt F(i–1) - f frequency of the excitation function - t+Δt Ks secant stiffness matrix at time t+Δt - t KT tangent stiffness matrix at time t - K linear part of t KT - effective stiffness matrix at time t - L distance between the wheel centres - M constant mass matrix - mT number of solution time steps - NC number of constraint equations - Q nonlinear dynamic equilibrium equation of the structural system - t+Δt R external applied load vector at time t+Δt - te active time interval for the excitation function - t U displacement vector of the finite element assemblage at time t - velocity of the finite element assemblage at time t - t Ü acceleration vector of the finite element assemblage at time t - t+Δt U(i) displacement vector of the finite element assemblage at the end of iteration i and time t+Δt - velocity vector of the finite element assemblage at the end of iteration i and time t+Δt - t+Δt Ü(i) acceleration vector of the finite element assemblage at the end of iteration i and time t+Δt - ΔU(i) vector of displacement increments from the end of iteration (i–1) to the end of iteration i at time t+Δt - V driving speed of motorcycle - x vector of design variables - δ() quantities of variation - ψ0 objective function - ψi i-th constraint equation
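The Newmark constants a0, a1 in the nomenclature refer to the implicit time-integration scheme used for the transient response. Below is a sketch of one linear Newmark step with constant-average-acceleration parameters and placeholder matrices; the paper itself applies the scheme incrementally to a nonlinear model.

```python
import numpy as np

def newmark_step(M, C, K, u, v, a, R_next, dt, beta=0.25, gamma=0.5):
    """One step of the Newmark method for M u'' + C u' + K u = R(t).

    Returns displacement, velocity and acceleration at time t + dt
    for a linear system."""
    a0 = 1.0 / (beta * dt ** 2)
    a1 = gamma / (beta * dt)
    a2 = 1.0 / (beta * dt)
    a3 = 1.0 / (2.0 * beta) - 1.0
    a4 = gamma / beta - 1.0
    a5 = dt / 2.0 * (gamma / beta - 2.0)

    K_eff = K + a0 * M + a1 * C                      # effective stiffness matrix
    R_eff = (R_next
             + M @ (a0 * u + a2 * v + a3 * a)
             + C @ (a1 * u + a4 * v + a5 * a))       # effective load vector
    u_next = np.linalg.solve(K_eff, R_eff)
    a_next = a0 * (u_next - u) - a2 * v - a3 * a
    v_next = v + dt * ((1.0 - gamma) * a + gamma * a_next)
    return u_next, v_next, a_next

# Toy 2-DOF system with a constant load, starting from rest
M = np.eye(2); C = 0.1 * np.eye(2); K = np.array([[4.0, -1.0], [-1.0, 3.0]])
u = v = a = np.zeros(2)
u, v, a = newmark_step(M, C, K, u, v, a, R_next=np.array([1.0, 0.0]), dt=0.01)
print(u, v, a)
```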

18.
This paper proposes the use of accessible information (data/knowledge) to infer inaccessible data in a distributed database system. Inference rules are extracted from databases by means of knowledge discovery techniques. These rules can derive inaccessible data due to a site failure or network partition in a distributed system. Such query answering requires combining incomplete and partial information from multiple sources. The derived answer may be exact or approximate. Our inference process involves two phases to reason with reconstructed information. One phase involves using local rules to infer inaccessible data. A second phase involves merging information from different sites. We shall call such reasoning processes cooperative data inference. Since the derived answer may be incomplete, new algebraic tools are developed for supporting operations on incomplete information. A weak criterion called toleration is introduced for evaluating the inferred results. The conditions that assure the correctness of combining partial results, known as sound inference paths, are developed. A solution is presented for terminating an iterative reasoning process on derived data from multiple knowledge sources. The proposed approach has been implemented on a cooperative distributed database testbed, CoBase, at UCLA. The experimental results validate the feasibility of this proposed concept and can significantly improve the availability of distributed knowledge base/database systems. List of notation: Mapping - --< Logical implication - = Symbolic equality - ==< Inference path - Satisfaction - Toleration - Undefined (does not exist) - Variable-null (may or may not exist) - * Subtuple relationship - * s-membership - s-containment - Open subtuple - Open s-membership - Open s-containment - P Open base - P Program - I Interpretation - DIP Data inference program - t Tuples - R Relations - Ø Empty interpretation - Open s-union - Open s-interpretation - Set of mappings from the set of objects to the set of closed objects - W Set of attributes - W Set of sound inference paths on the set of attributes W - Set of relational schemas in a DB that satisfy MVD - + Range closure of W wrt

19.
In a model for a measure of computational complexity, Φ, for a partial recursive function t, let R_t denote all partial recursive functions having the same domain as t and computable within time t. Let F = {R_t | t is recursive} and let G = {R_Φi | Φi is actually the running time function of a computation}. F and G are partially ordered under set-theoretic inclusion. These partial orderings have been extensively investigated by Borodin, Constable and Hopcroft in [3]. In this paper we present a simple uniform proof of some of their results. For example, we give a procedure for easily calculating a model of computational complexity for which one of these families is not dense while the other is dense. In our opinion, our technique is so transparent that it indicates that certain questions of density are not intrinsically interesting for general abstract measures of computational complexity Φ. (This is not to say that similar questions are necessarily uninteresting for specific models.) Supported by NSF Research Grants GP6120 and GJ27127.

20.
Within AI and the cognitively related disciplines, there exists a multiplicity of uses of belief. On the face of it, these differing uses reflect differing views about the nature of an objective phenomenon called belief. In this paper I distinguish six distinct ways in which belief is used in AI. I shall argue that not all these uses reflect a difference of opinion about an objective feature of reality. Rather, in some cases, the differing uses reflect differing concerns with special AI applications. In other cases, however, genuine differences exist about the nature of what we pre-theoretically call belief. To an extent the multiplicity of opinions about, and uses of, belief echoes the discrepant motivations of AI researchers. The relevance of this discussion for cognitive scientists and philosophers arises from the fact that (a) many regard theoretical research within AI as a branch of cognitive science, and (b) even if theoretical AI is not cognitive science, trends within AI influence theories developed within cognitive science. It should be beneficial, therefore, to unravel the distinct uses and motivations surrounding belief, in order to discover which usages merely reflect differing pragmatic concerns, and which usages genuinely reflect divergent views about reality.
