Similar Articles
20 similar articles found (search time: 31 ms)
1.
A multicriterion optimization method is proposed for complex systems with parameters ranked by descending importance. This method requires weaker expert estimates for choosing an optimal alternative from the set of equally good solutions, by formally specifying the functional dependence between the ranked parameter weights. Translated from Kibernetika i Sistemnyi Analiz, No. 6, pp. 167–170, November–December, 1991.
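A minimal sketch of the idea, assuming a geometric decay as the functional dependence between consecutive ranked weights (the ratio rho and all names are illustrative, not the authors' formulation):

    # Hypothetical sketch: score alternatives whose criteria are ordered by
    # descending importance, with weights tied by one functional dependence
    # (geometric decay) instead of exact per-criterion expert estimates.
    def ranked_weights(k, rho=0.5):
        raw = [rho ** i for i in range(k)]
        total = sum(raw)
        return [w / total for w in raw]

    def best_alternative(alternatives):
        # each alternative is a list of criterion values, most important first
        w = ranked_weights(len(alternatives[0]))
        return max(alternatives, key=lambda a: sum(wi * ai for wi, ai in zip(w, a)))

    print(best_alternative([[0.9, 0.1, 0.5], [0.8, 0.9, 0.9]]))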

2.
On the Characteristics of Growing Cell Structures (GCS) Neural Network
In this paper, a self-developing neural network model, namely the Growing Cell Structures (GCS), is characterized. In GCS each node (or cell) is associated with a local resource counter τ(t). We show that GCS has a conservation property by which the summation of all resource counters always equals s/α, where s is the increment added to τ(t) of the winning node after each input presentation and α (0 < α < 1.0) is the forgetting (i.e., decay) factor applied to τ(t) of non-winning nodes. The conservation property provides an insight into how GCS can maximize information entropy. The property is also employed to unveil the chain-reaction effect and race condition which can greatly influence the performance of GCS. We show that GCS can perform better in terms of the equi-probable criterion if the resource counters are decayed by a smaller α.
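A toy simulation of the counter dynamics as reconstructed above; the symbols s and α were recovered from a garbled rendering, so the exact constant is an assumption:

    import random

    def update_counters(tau, winner, s=1.0, alpha=0.05):
        # the winning cell gains s; every other cell decays by factor alpha
        for i in range(len(tau)):
            if i == winner:
                tau[i] += s
            else:
                tau[i] *= (1.0 - alpha)

    tau = [0.0] * 10
    for _ in range(10000):
        update_counters(tau, random.randrange(10))
    # total settles near s/alpha = 20, slightly above because the winner
    # itself is exempt from the decay step
    print(sum(tau))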

3.
This paper presents a detailed study of Eurotra Machine Translation engines, namely the mainstream Eurotra software known as the E-Framework, and two unofficial spin-offs – the C,A,T and Relaxed Compositionality translator notations – with regard to how these systems handle hard cases, and in particular their ability to handle combinations of such problems. In the C,A,T translator notation, some cases of complex transfer are 'wild', meaning roughly that they interact badly when presented with other complex cases in the same sentence. The effect of this is that each combination of a wild case and another complex case needs ad hoc treatment. The E-Framework is the same as the C,A,T notation in this respect. In general, the E-Framework is equivalent to the C,A,T notation for the task of transfer. The Relaxed Compositionality translator notation is able to handle each wild case (bar one exception) with a single rule, even where it appears in the same sentence as other complex cases.

4.
This paper presents a project to align and match bilingual English–Chinese news files downloaded from the China News Services website. The work involves the alignment of bilingual texts at the sentence and clause levels. In addition, the work also requires matching of files, as the English and Chinese news files downloaded from the web do not come in the same sequential order. These news files have their own characteristics and, furthermore, the issue of file-matching has its unique difficulties apart from the known problems of alignment work previously reported in the literature. To align the news files we combine the criteria of anchors (i.e. unambiguous corresponding text elements) and sentence length. We employ Dynamic Programming first to align at the paragraph level, then to align at the sentence-clause level. The precision and recall of the alignment are satisfactory for free translation texts. To match English and Chinese files, we make use of the anchors alone. In file matching we encounter a collision problem due to contending matching candidates, and propose a recursive splitting algorithm to resolve the problem. We allow human intervention to improve the precision of matching, and succeeded in achieving 100% precision with a fairly small amount of manual effort. Finally, to determine the various parameters used in aligning and matching, we utilize a Genetic Algorithm software package to obtain their optimized values.
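A minimal sketch of the length-based dynamic-programming alignment step (Gale-Church flavour, with 1-1, 1-2 and 2-1 beads); the cost function is an illustrative simplification of the anchor-plus-length criteria described, not the paper's exact formula:

    import math

    def align(src_lens, tgt_lens, c=1.0):
        # DP over sentence lengths; allowed beads: 1-1, 1-2, 2-1.
        # Assumes the two sides can be covered by these bead types.
        n, m = len(src_lens), len(tgt_lens)
        INF = float("inf")
        cost = [[INF] * (m + 1) for _ in range(n + 1)]
        back = [[None] * (m + 1) for _ in range(n + 1)]
        cost[0][0] = 0.0

        def bead_cost(ls, lt):
            # penalty grows with normalized length mismatch
            return abs(ls - c * lt) / math.sqrt(ls + lt + 1.0)

        for i in range(n + 1):
            for j in range(m + 1):
                if cost[i][j] == INF:
                    continue
                for di, dj in ((1, 1), (1, 2), (2, 1)):
                    if i + di <= n and j + dj <= m:
                        cand = cost[i][j] + bead_cost(sum(src_lens[i:i + di]),
                                                      sum(tgt_lens[j:j + dj]))
                        if cand < cost[i + di][j + dj]:
                            cost[i + di][j + dj] = cand
                            back[i + di][j + dj] = (di, dj)

        beads, i, j = [], n, m          # trace back the best path
        while i > 0 or j > 0:
            di, dj = back[i][j]
            beads.append(((i - di, i), (j - dj, j)))
            i, j = i - di, j - dj
        return list(reversed(beads))

    print(align([10, 12, 8], [11, 6, 7, 8]))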

5.
The problem of choosing a best program for a function presented by example is considered. General axioms for total complexity involving time and size measures are presented. For measures obeying the axioms, certain positive and negative results are obtained on the existence of effective algorithms for learning the best program.

6.
The design of the database is crucial to the process of designing almost any Information System (IS) and involves two clearly identifiable key concepts: schema and data model, the latter allowing us to define the former. Nevertheless, the term model is commonly applied indistinctly to both, the confusion arising from the fact that in Software Engineering (SE), unlike in formal or empirical sciences, the notion of model has a double meaning of which we are not always aware. If we take our idea of model directly from empirical sciences, then the schema of a database would actually be a model, whereas the data model would be a set of tools allowing us to define such a schema. The present paper discusses the meaning of model in the area of Software Engineering from a philosophical point of view, an important topic since the confusion it identifies directly affects other debates where model is a key concept. We also suggest that the need for a philosophical discussion on the concept of data model is a further argument in favour of institutionalizing a new area of knowledge, which could be called the Philosophy of Engineering.

7.
I discuss the attitude of Jewish law sources from the 2nd–5th centuries to the imprecision of measurement. I review a problem that the Talmud refers to, somewhat obscurely, as 'impossible reduction'. This problem arises when a legal rule specifies an object by referring to a maximized (or minimized) measurement function, e.g., when a rule applies to the largest part of a divided whole, or to the first incidence that occurs, etc. A problem that is often mentioned is whether there might be hypothetical situations involving more than one maximal (or minimal) value of the relevant measurement and, given such situations, what the pertinent legal rule is. Presumptions of simultaneous occurrences or equally measured values are also a source of embarrassment to modern legal systems, in situations exemplified in the paper, where law determines a preference based on measured values. I contend that the Talmudic sources discussing the problem of impossible reduction were guided by primitive insights compatible with a fuzzy-logic presentation of the inevitable uncertainty involved in measurement. I maintain that fuzzy models of data are compatible with a positivistic epistemology, which refuses to assume any precision in the extra-conscious world that may not be captured by observation and measurement. I therefore propose this view as the preferred interpretation of the Talmudic notion of impossible reduction. Attributing a fuzzy world view to the Talmudic authorities is meant not only to increase our understanding of the Talmud but, in so doing, also to demonstrate that fuzzy notions are entrenched in our practical reasoning. If Talmudic sages did indeed conceive the results of measurements in terms of fuzzy numbers, then equality between the results of measurements had to be more complicated than crisp equations. The problem of impossible reduction could lie in fuzzy sets with an empty core or whose membership functions were only partly congruent. 'Reduction is impossible' may thus be reconstructed as 'there is no core to the intersection of the two measures'. I describe Dirichlet maps for fuzzy measurements of distance as a rough partition of the universe, where for any region A there may be a non-empty boundary set Ā − A̲ (upper approximation minus lower approximation) where the problem of impossible reduction applies. This model may easily be combined with a probabilistic extension. The possibility of adopting practical decision standards based on α-cuts (and therefore applying interval analysis to fuzzy equations) is discussed in this context. I propose to characterize the uncertainty that was presumably grasped by the old sages as U-uncertainty, defined, for a non-empty fuzzy set A on the set of real numbers whose α-cuts are intervals of real numbers, as U(A) = (1/h(A)) ∫₀^{h(A)} log₂[1 + μ(A_α)] dα, where h(A) is the largest membership value obtained by any element of A and μ(A_α) is the measure of the α-cut A_α of A, defined by the Lebesgue integral of its characteristic function.
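A numeric check of the reconstructed U-uncertainty formula for a finite fuzzy set, taking the measure of an α-cut to be its cardinality (an assumption for illustration; the abstract defines it via a Lebesgue integral):

    import math

    def u_uncertainty(membership, steps=1000):
        # U(A) = (1/h(A)) * integral over alpha in (0, h(A)] of
        #        log2(1 + measure of the alpha-cut), midpoint rule
        h = max(membership.values())        # height of the fuzzy set
        total = 0.0
        for k in range(steps):
            alpha = h * (k + 0.5) / steps
            cut = sum(1 for m in membership.values() if m >= alpha)
            total += math.log2(1 + cut)
        return total / steps                # the h factors cancel

    A = {"x1": 1.0, "x2": 0.7, "x3": 0.3}
    print(u_uncertainty(A))                 # about 1.53 for this set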

8.
Dr. T. Ström, Computing, 1972, 10(1–2): 1–7
It is a commonly occurring problem to find good norms ‖·‖ or logarithmic norms μ(·) for a given matrix A, in the sense that they should be close to the spectral radius ρ(A) and the spectral abscissa α(A), respectively. Examples may be the certification that A is convergent, i.e. ρ(A) ≤ ‖A‖ < 1, or stable, i.e. α(A) ≤ μ(A) < 0. Often the ordinary norms do not suffice, and one would like to try simple modifications of them, such as using an ordinary norm for a diagonally transformed matrix. This paper treats this problem for some of the ordinary norms.
Minimization of Norms and Logarithmic Norms by Diagonal Transformations
Summary: A frequently occurring practical problem is the construction of good norms ‖·‖ and logarithmic norms μ(·) for a given matrix A. 'Good' here means that ‖A‖ approximates the spectral radius ρ(A) = max |λ_i| well, and μ(A) the spectral abscissa α(A) = max Re λ_i. Examples are found for convergent matrices, where ρ(A) ≤ ‖A‖ < 1 is desired, and for stable matrices, where α(A) ≤ μ(A) < 0 is to be shown. We investigate here how far one can get with diagonal transformations and the most common norms.
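A sketch of the quantities involved, using the standard formula for the logarithmic norm induced by the ∞-norm; the diagonal scaling is hand-picked for illustration, not produced by the paper's method:

    import numpy as np

    def mu_inf(A):
        # mu_inf(A) = max_i ( a_ii + sum_{j != i} |a_ij| )
        A = np.asarray(A, dtype=float)
        off = np.abs(A).sum(axis=1) - np.abs(np.diag(A))
        return float(np.max(np.diag(A) + off))

    def mu_inf_scaled(A, d):
        # logarithmic norm of the diagonally transformed matrix D A D^{-1}
        D = np.diag(d)
        return mu_inf(D @ A @ np.linalg.inv(D))

    A = np.array([[-1.0, 10.0], [0.0, -2.0]])
    print(mu_inf(A))                      # 9.0: fails to certify stability
    print(mu_inf_scaled(A, [1.0, 20.0]))  # -0.5: < 0, closer to alpha(A) = -1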

9.
In this paper, we consider the linear interval tolerance problem, which consists of finding the largest interval vector included in the tolerance solution set Σ_∀∃([A],[b]) = {x ∈ R^n | ∀A ∈ [A], ∃b ∈ [b] : Ax = b}. We describe two different polyhedrons that represent subsets of all possible interval vectors in Σ_∀∃([A],[b]), and we provide a new definition of the optimality of an interval vector included in Σ_∀∃([A],[b]). Finally, we show how the Simplex algorithm can be applied to find an optimal interval vector in Σ_∀∃([A],[b]).
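A membership test for the tolerance set under the reconstructed reading {x : [A]x ⊆ [b]}; array names are illustrative:

    import numpy as np

    def in_tolerance_set(A_lo, A_hi, b_lo, b_hi, x):
        # With x fixed, each entry a_ij * x_j ranges over an interval whose
        # endpoints depend on the sign of x_j; summing these gives the exact
        # range of (Ax)_i over all A in [A].
        lo = np.minimum(A_lo * x, A_hi * x).sum(axis=1)
        hi = np.maximum(A_lo * x, A_hi * x).sum(axis=1)
        return bool(np.all(lo >= b_lo) and np.all(hi <= b_hi))

    A_lo = np.array([[0.9, -0.1], [0.0, 1.8]])
    A_hi = np.array([[1.1, 0.1], [0.2, 2.2]])
    b_lo = np.array([0.5, 1.5])
    b_hi = np.array([1.5, 2.5])
    print(in_tolerance_set(A_lo, A_hi, b_lo, b_hi, np.array([1.0, 1.0])))  # True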

10.
The notion of obvious inference in predicate logic is discussed from the viewpoint of proof-checker applications in logic and mathematics education. A class of inferences in predicate logic is defined and it is proposed to identify it with the class of obvious logical inferences. The definition is compared with other approaches. The algorithm for implementing the obviousness decision procedure follows directly from the definition.

11.
Our starting point is a definition of the conditional event E|H which differs from many seemingly similar ones adopted in the relevant literature since 1935, starting with de Finetti. In fact, if we do not assign the same third value u (undetermined) to all conditional events, but make it depend on E|H, it turns out that this function t(E|H) can be taken as a general conditional uncertainty measure, and we get (through a suitable – in a sense, compulsory – choice of the relevant operations among conditional events) the natural axioms for many different (besides probability) conditional measures.
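For contrast, a sketch of the classical three-valued reading that the paper refines, where E|H is undetermined exactly when the conditioning event H fails:

    def conditional_event(E, H):
        # true when E and H hold, false when H holds but E fails,
        # undetermined ('u') when H does not occur at all
        if not H:
            return "u"
        return E

    print(conditional_event(True, True))    # True
    print(conditional_event(False, True))   # False
    print(conditional_event(True, False))   # u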

12.
The paper introduces the concept of Computer-based Informated Environments (CBIEs) to indicate an emergent form of work organisation facilitated by information technology. It first addresses the problem of inconsistent meanings of the 'informate' concept in the literature, and it then focuses on those cases which, it is believed, show conditions of plausible informated environments. Finally, the paper looks at those factors that, when found together, contribute to building a CBIE. It makes reference to CBIEs as workplaces that comprise a non-technocentric perspective and questions whether CBIEs truly represent an anthropocentric route of information technology.

13.
The language of standard propositional modal logic has one operator (□ or ◇) that can be thought of as being determined by the quantifiers ∀ or ∃, respectively: for example, a formula of the form □φ is true at a point s just in case all the immediate successors of s verify φ. This paper uses a propositional modal language with one operator determined by a generalized quantifier to discuss a simple connection between standard invariance conditions on modal formulas and generalized quantifiers: the combined generalized quantifier conditions of conservativity and extension correspond to the modal condition of invariance under generated submodels, and the modal condition of invariance under bisimulations corresponds to the generalized quantifier being a Boolean combination of ∀ and ∃.
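A toy evaluator that makes the successor quantifier explicit (the tiny model and all names are illustrative): all gives the standard box, any the diamond, and any other predicate on the successor list plays the role of a generalized quantifier:

    def successors(R, s):
        return [t for (u, t) in R if u == s]

    def box(R, s, phi, quantifier=all):
        # standard box uses 'all'; 'any' gives the diamond; a generalized
        # quantifier is any other predicate over the successor values
        return quantifier(phi(t) for t in successors(R, s))

    R = [(0, 1), (0, 2), (1, 2)]
    even = lambda t: t % 2 == 0
    print(box(R, 0, even))        # False: successor 1 is odd
    print(box(R, 0, even, any))   # True: successor 2 is even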

14.
A technique to model and to verify distributed algorithms is suggested. This technique (based on Petri nets) reduces the modelling and analysis effort to a reasonable level. The paper outlines the technique using the example of a typical network algorithm, the echo algorithm. Supported by the DFG projects 'Verteilte Algorithmen' and 'Konsensalgorithmen'.
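A sequential sketch of the echo algorithm's explore/echo message pattern on an undirected graph, with plain recursion standing in for the paper's Petri-net model:

    def echo(graph, start):
        # explorer messages flood outward from the initiator; a node echoes
        # back once it has heard from every neighbour it forwarded to
        visited = {start}
        def explore(node):
            for nb in graph[node]:
                if nb not in visited:
                    visited.add(nb)
                    explore(nb)   # explorer sent; returning means echo received
        explore(start)
        return visited            # initiator terminates: every node reached

    g = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3], 3: [2]}
    print(echo(g, 0))             # {0, 1, 2, 3}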

15.
On Bounding Solutions of Underdetermined Systems
Sufficient conditions for the existence and uniqueness of a solution x* ∈ D (⊂ R^n) of Y(x) = 0, where f : R^n → R^m (m ≤ n) with f ∈ C²(D), D ⊂ R^n is an open convex set, and Y(x) = f′(x)⁺ f(x), are given and are compared with similar results due to Zhang, Li and Shen (Reliable Computing 5(1) (1999)). An algorithm for bounding zeros of f(·) is described, and numerical results for several examples are given.

16.
Mutual convertibility of bound entangled states under local quantum operations and classical communication (LOCC) is studied. We focus on states associated with unextendible product bases (UPB) in a system of three qubits. A complete classification of such UPBs is suggested. We prove that for any pair of UPBs S and T the associated bound entangled states ρ_S and ρ_T cannot be converted to each other by LOCC, unless S and T coincide up to local unitaries. More specifically, there exists a finite precision ε(S,T) > 0 such that for any LOCC protocol mapping ρ_S into a probabilistic ensemble (p_k, ρ_k), the fidelity between ρ_T and any possible final state satisfies F(ρ_T, ρ_k) ≤ 1 − ε(S,T). PACS: 03.65.Bz; 03.67.-a; 89.70.+c.

17.
A more accurate identification (estimation of parameters) of simple Markov-Gibbs random field models of images results in a better segmentation of specific multimodal images and realistic synthesis of some types of natural textures. Identification algorithms for segmentation are based in part on a novel modification of an unsupervised learning algorithm published first in Cybernetics and Systems Analysis (Kibernetika i Sistemnyi Analiz) almost four decades ago. A texture synthesis algorithm uses an identified model for selecting a characteristic geometric shape of, and a placement grid for, texture elements sampled from a training image utilized for the identification. Translated from Kibernetika i Sistemnyi Analiz, No. 1, pp. 37–49, January–February 2005.

18.
D. R. Simon stated a problem, the so-called Simon's problem, whose computational complexity is in the class BQP^f but not in BPP^f, where f is the function (oracle) given in the problem. This result indicates that BPP may be strictly included in its quantum counterpart, BQP. Later, G. Brassard and P. Høyer showed that Simon's problem and its extended version can be solved by a deterministic polynomial-time quantum algorithm; that is, these problems are in the class EQP. In this paper, we show that Simon's problem and its extended version can be deterministically solved in a simpler and more concrete way than that proposed by G. Brassard and P. Høyer.
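For orientation, a sketch of the classical side of Simon's problem — finding the hidden mask s of a function with f(x) = f(x ⊕ s) by collision search, which is exponential in the worst case; this is illustration only, not the deterministic quantum algorithm of the paper:

    def find_mask_classically(f, n):
        # search for a collision among n-bit inputs; a colliding pair
        # (x, x') satisfies x XOR x' = s
        seen = {}
        for x in range(2 ** n):
            y = f(x)
            if y in seen:
                return seen[y] ^ x
            seen[y] = x
        return 0                        # f injective, so s = 0

    s = 0b101
    f = lambda x: min(x, x ^ s)         # toy 2-to-1 function with mask s
    print(bin(find_mask_classically(f, 3)))  # 0b101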

19.
Through key examples and constructs, exact and approximate, the complexity, computability, and solution of linear programming systems are reexamined in the light of Khachian's new notion of (approximate) solution. Algorithms, basic theorems, and alternate representations are reviewed. It is shown that the Klee-Minty example has never been exponential for (exact) adjacent extreme point algorithms and that the Balinski-Gomory (exact) algorithm continues to be polynomial in cases where (approximate) ellipsoidal centered-cutoff algorithms (Levin, Shor, Khachian, Gacs-Lovasz) are exponential. By model approximation, both the Klee-Minty and the new J. Clausen examples are shown to be trivial (explicitly solvable) interval programming problems. A new notion of computable (approximate) solution is proposed together with an a priori regularization for linear programming systems. New polyhedral constraint contraction algorithms are proposed for approximate solution, and the relevance of interval programming for 'good starts' or exact solution is brought forth. It is concluded from all this that the imposed problem ignorance of past complexity research is deleterious to research progress on computability or efficiency of computation. This research was partly supported by Project NR047-071, ONR Contract N00014-80-C-0242, and Project NR047-021, ONR Contract N00014-75-C-0569, with the Center for Cybernetic Studies, The University of Texas at Austin.
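A sketch of the Klee-Minty cube for n = 3 in one standard formulation (assumed here, since the abstract does not state it), solved with SciPy's LP solver:

    # max sum_j 2^{n-j} x_j  s.t.  sum_{j<i} 2^{i-j+1} x_j + x_i <= 5^i, x >= 0
    import numpy as np
    from scipy.optimize import linprog

    n = 3
    c = -np.array([2.0 ** (n - j) for j in range(1, n + 1)])  # negate to maximize
    A = np.zeros((n, n))
    b = np.array([5.0 ** i for i in range(1, n + 1)])
    for i in range(n):
        for j in range(i):
            A[i, j] = 2.0 ** (i - j + 1)
        A[i, i] = 1.0
    res = linprog(c, A_ub=A, b_ub=b, bounds=[(0, None)] * n)
    print(res.x)   # optimum at (0, 0, 5**n) = (0, 0, 125)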

20.
Regions-of-Interest and Spatial Layout for Content-Based Image Retrieval
To date most content-based image retrieval (CBIR) techniques rely on global attributes such as color or texture histograms which tend to ignore the spatial composition of the image. In this paper, we present an alternative image retrieval system based on the principle that it is the user who is most qualified to specify the query content and not the computer. With our system, the user can select multiple regions-of-interest and can specify the relevance of their spatial layout in the retrieval process. We also derive similarity bounds on histogram distances for pruning the database search. This experimental system was found to be superior to global indexing techniques as measured by statistical sampling of multiple users' satisfaction ratings.
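A sketch of the generic pruning idea behind similarity bounds on histogram distances — a partial L1 distance lower-bounds the full distance, so candidates can be discarded early; the paper's specific bounds may differ:

    import numpy as np

    def l1(h1, h2):
        return float(np.abs(h1 - h2).sum())

    def search(query, database, k_partial=16):
        best, best_d = None, float("inf")
        for idx, h in enumerate(database):
            # L1 over the first k bins is a lower bound on the full L1,
            # since the remaining terms are nonnegative
            partial = float(np.abs(query[:k_partial] - h[:k_partial]).sum())
            if partial >= best_d:
                continue            # pruned without the full comparison
            d = l1(query, h)
            if d < best_d:
                best, best_d = idx, d
        return best, best_d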
