Similar Documents
20 similar documents found.
1.
Our starting point is a definition of the conditional event E|H which differs from many seemingly similar ones adopted in the relevant literature since 1935, starting with de Finetti. In fact, if we do not assign the same third value u (undetermined) to all conditional events, but make it depend on E|H, it turns out that this function t(E|H) can be taken as a general conditional uncertainty measure, and we get (through a suitable, in a sense compulsory, choice of the relevant operations among conditional events) the natural axioms for many different conditional measures besides probability.
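To make the event-dependent third value concrete, here is a small Python sketch (my own illustration, not the paper's formalism): the function name, the boolean encoding of E and H, and the numeric values chosen for t(E|H) are all assumptions.

```python
# A minimal sketch of a three-valued conditional event E|H: it is true when E and H
# both occur, false when H occurs without E, and takes a third value t(E|H),
# here a hypothetical numeric degree in [0, 1], when H fails to occur.

def conditional_event(e: bool, h: bool, t_value: float):
    """Evaluate E|H: True, False, or the event-dependent third value t(E|H)."""
    if h:
        return e          # H occurred: E|H is simply the truth value of E
    return t_value        # H did not occur: return the measure t(E|H), not a fixed 'u'

# Two different conditional events may receive different third values,
# which is the point of letting 'u' depend on E|H.
print(conditional_event(True, True, 0.7))   # True
print(conditional_event(False, True, 0.7))  # False
print(conditional_event(True, False, 0.7))  # 0.7  (t assigned to this E|H)
print(conditional_event(True, False, 0.2))  # 0.2  (a different E|H, different t)
```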

2.
Summary This paper is devoted to developing and studying a precise notion of the encoding of a logical data structure in a physical storage structure, that is motivated by considerations of computational efficiency. The development builds upon the notion of an encoding of one graph in another. The cost of such an encoding is then defined so as to reflect the structural compatibility of the two graphs, the (externally specified) costs of implementing the host graph, and the (externally specified) set of intended usage patterns of the guest graph. The stability of the constructed framework is demonstrated in terms of a number of results; the faithfulness of the formalism is argued in terms of a number of examples from the literature; and the tractability of the model is hinted at by several results and by further references to the literature.
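One plausible way to instantiate such a cost, sketched below in Python under my own assumptions (the usage weights, the unit-distance host metric, and the particular cost formula are illustrative, not the paper's definition), is to charge each guest edge the host-graph distance between the images of its endpoints, weighted by that edge's expected usage.

```python
from collections import deque

# A minimal sketch of an encoding cost: guest vertices are mapped to host vertices,
# and each guest edge is charged the host-graph distance between the images,
# weighted by how often that guest edge is expected to be used.

def bfs_distance(adj, src, dst):
    """Unweighted shortest-path length in the host graph (adjacency dict)."""
    seen, frontier = {src}, deque([(src, 0)])
    while frontier:
        v, dist = frontier.popleft()
        if v == dst:
            return dist
        for w in adj[v]:
            if w not in seen:
                seen.add(w)
                frontier.append((w, dist + 1))
    return float("inf")  # dst unreachable

def encoding_cost(guest_edges, usage, embedding, host_adj):
    """Sum over guest edges of (usage frequency) * (host distance of the images)."""
    return sum(usage[e] * bfs_distance(host_adj, embedding[e[0]], embedding[e[1]])
               for e in guest_edges)

# Toy example: embed a 3-vertex path guest into a 4-cycle host.
host = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}
guest_edges = [("a", "b"), ("b", "c")]
usage = {("a", "b"): 5, ("b", "c"): 1}         # assumed usage pattern
embedding = {"a": 0, "b": 1, "c": 3}           # guest vertex -> host vertex
print(encoding_cost(guest_edges, usage, embedding, host))  # 5*1 + 1*2 = 7
```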

3.
Through key examples and constructs, exact and approximate, complexity, computability, and solution of linear programming systems are reexamined in the light of Khachian's new notion of (approximate) solution. Algorithms, basic theorems, and alternate representations are reviewed. It is shown that the Klee-Minty example has never been exponential for (exact) adjacent extreme point algorithms and that the Balinski-Gomory (exact) algorithm continues to be polynomial in cases where (approximate) ellipsoidal centered-cutoff algorithms (Levin, Shor, Khachian, Gacs-Lovasz) are exponential. By model approximation, both the Klee-Minty and the new J. Clausen examples are shown to be trivial (explicitly solvable) interval programming problems. A new notion of computable (approximate) solution is proposed together with an a priori regularization for linear programming systems. New polyhedral constraint contraction algorithms are proposed for approximate solution and the relevance of interval programming for good starts or exact solution is brought forth. It is concluded from all this that the imposed problem ignorance of past complexity research is deleterious to research progress on computability or efficiency of computation. This research was partly supported by Project NR047-071, ONR Contract N00014-80-C-0242, and Project NR047-021, ONR Contract N00014-75-C-0569, with the Center for Cybernetic Studies, The University of Texas at Austin.
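For readers unfamiliar with the Klee-Minty example, the sketch below builds one standard formulation of the Klee-Minty cube and hands it to an off-the-shelf LP solver (SciPy's linprog); the chosen formulation and solver are my own illustrative assumptions and do not reproduce the paper's exact/approximate solution notions.

```python
import numpy as np
from scipy.optimize import linprog

# A minimal sketch of a classical Klee-Minty cube instance:
#   max  sum_j 2^(n-j) x_j
#   s.t. sum_{j<i} 2^(i-j+1) x_j + x_i <= 5^i   for i = 1..n,   x >= 0.
# Simplex-type pivoting rules can visit exponentially many vertices on such
# instances, yet the instance itself is tiny and easily solved here.

def klee_minty(n):
    c = np.array([-(2.0 ** (n - j)) for j in range(1, n + 1)])  # negate: linprog minimizes
    A = np.zeros((n, n))
    b = np.array([5.0 ** i for i in range(1, n + 1)])
    for i in range(1, n + 1):
        for j in range(1, i):
            A[i - 1, j - 1] = 2.0 ** (i - j + 1)
        A[i - 1, i - 1] = 1.0
    return c, A, b

c, A, b = klee_minty(5)
res = linprog(c, A_ub=A, b_ub=b, method="highs")
print(res.x)        # optimum puts all weight on the last coordinate: x_n = 5**n
print(-res.fun)     # optimal value 5**n = 3125 for n = 5
```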

4.
The ongoing integration of LANs and WANs to support global communications and businesses and the emergence of integrated broadband communication services have created an increased demand for cooperation between customers, network and service providers to achieve end-to-end service management. Such a cooperation between autonomous authorities, each defining their own administrative management domains, requires the application of an open standardized framework to facilitate and regulate interworking. Such a framework is given by the ITU-T recommendations on TMN, where the so-called X interface is of particular importance for inter-domain management. In this paper, we explain the role of the TMN X interface within an inter-domain TMN architecture supporting end-to-end communications management. We identify the important issues that need to be addressed for the definition and realization of TMN X interfaces and report on our practical experience with the implementation of TMN X interfaces in the PREPARE project.

5.
The mainstream of legal theory tends to incorporate unwritten principles into the law. Weighing of principles plays a great role in legal argumentation, inter alia in statutory interpretation. A weighing and balancing of principles and other prima facie reasons is a jump: the inference is not conclusive. To deal with defeasibility and weighing, a jurist needs both belief-revision logic and nonmonotonic logic. The systems of nonmonotonic logic included in the present volume provide logical tools enabling one to speak precisely about various kinds of rules about rules, dealing with such things as the applicability of rules, what is assumed by rules, priority between rules, and the burden of proof. Nonmonotonic logic is an example of an extension of the domain of logic, but the more far-reaching the extension is, the greater the problems it meets. It seems impossible to make a logical reconstruction of the totality of legal argumentation. The lawyers' search for reasons has no obvious end point. Ideally, the search for reasons may end when one arrives at a coherent totality of knowledge; in other words, coherence is the termination condition of reasoning. Both scientific knowledge and knowledge of legal and moral norms progress by trial and error, and one must resort to a certain convention to define what error means. The main difference is, however, that the conventions of science are much more precise than those of legal scholarship. Consequently, the determination of error in legal science is often holistic and circular: the reasons determining that a legal theory is erroneous are not more certain than the contested theory itself. A strict and formal logical analysis cannot give us the full grasp of legal rationality. A weaker logical theory, allowing for nonmonotonic steps, comes closer, at the expense of an inevitable loss of computational efficiency. Coherentist epistemology grasps even more of this rationality, at the expense of a loss of precision.
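As a toy illustration of what "rules about rules" (applicability, priority, retraction) can look like operationally, the Python sketch below encodes prioritized defeasible rules; the rule contents, priority numbers, and conflict-resolution policy are my own assumptions, not a reconstruction of any system in the volume.

```python
from dataclasses import dataclass

# A minimal sketch of prioritized defeasible rules: each rule has premises, a
# conclusion, and a priority; applicable rules may conflict, and the conflict is
# resolved by comparing priorities.

@dataclass
class Rule:
    premises: frozenset
    conclusion: str        # a literal, e.g. "liable" or "not liable"
    priority: int          # higher priority wins in a conflict

def negate(lit):
    return lit[4:] if lit.startswith("not ") else "not " + lit

def conclude(facts, rules):
    applicable = [r for r in rules if r.premises <= facts]
    conclusions = {}
    for r in applicable:
        opponent = conclusions.get(negate(r.conclusion))
        if opponent is None or r.priority > opponent.priority:
            # keep the stronger of any two conflicting applicable rules
            conclusions.pop(negate(r.conclusion), None)
            conclusions[r.conclusion] = r
    return set(conclusions)

rules = [Rule(frozenset({"contract", "breach"}), "liable", 1),
         Rule(frozenset({"contract", "breach", "force majeure"}), "not liable", 2)]
# Note the nonmonotonic retraction once the more specific fact becomes available.
print(conclude({"contract", "breach"}, rules))                   # {'liable'}
print(conclude({"contract", "breach", "force majeure"}, rules))  # {'not liable'}
```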

6.
Agent-based technology has been identified as an important approach for developing next-generation manufacturing systems. One of the key techniques needed for implementing such advanced systems will be learning. This paper first discusses learning issues in agent-based manufacturing systems and reviews related approaches, then describes how to enhance the performance of an agent-based manufacturing system through learning from history (based on distributed case-based learning and reasoning) and learning from the future (through system forecasting simulation). Learning from history is used to enhance coordination capabilities by minimizing communication and processing overheads. Learning from the future is used to adjust promissory schedules through forecasting simulation, taking into account shop-floor interactions and production and transportation times. Detailed learning and reasoning mechanisms are described and partial experimental results are presented.

7.
Given a finite set E ⊂ R^n, the problem is to find clusters (or subsets of similar points in E) and at the same time to find the most typical elements of this set. An original mathematical formulation of the problem is given. The proposed algorithm operates on groups of points, called samplings (samplings may be called multiple centers or cores); these samplings adapt and evolve into interesting clusters. Compared with other clustering algorithms, this algorithm requires less machine time and storage. We provide some propositions about nonprobabilistic convergence and a sufficient condition which ensures the decrease of the criterion. Some computational experiments are presented.

8.
Summary Equivalence is a fundamental notion for the semantic analysis of algebraic specifications. In this paper the notion of crypt-equivalence is introduced and studied w.r.t. two loose approaches to the semantics of an algebraic specification T: the class of all first-order models of T and the class of all term-generated models of T. Two specifications are called crypt-equivalent if for one specification there exists a predicate logic formula which implicitly defines an expansion (by new functions) of every model of that specification in such a way that the expansion (after forgetting unnecessary functions) is homologous to a model of the other specification, and if vice versa there exists another predicate logic formula with the same properties for the other specification. We speak of first-order crypt-equivalence if this holds for all first-order models, and of inductive crypt-equivalence if this holds for all term-generated models. Characterizations and structural properties of these notions are studied. In particular, it is shown that first-order crypt-equivalence is equivalent to the existence of explicit definitions and that in the case of positive definability two first-order crypt-equivalent specifications admit the same categories of models and homomorphisms. Similarly, two specifications which are inductively crypt-equivalent via sufficiently complete implicit definitions determine the same associated categories. Moreover, crypt-equivalence is compared with other notions of equivalence for algebraic specifications: in particular, it is shown that first-order crypt-equivalence is strictly coarser than abstract semantic equivalence and that inductive crypt-equivalence is strictly finer than inductive simulation equivalence and implementation equivalence.

9.
On the number of Eulerian orientations of a graph
M. Mihail, P. Winkler. Algorithmica, 1996, 16(4-5): 402-414
An Eulerian orientation of an undirected Eulerian graph is an orientation of the edges of the graph such that for every vertex the in-degree is equal to the out-degree. Eulerian orientations are natural flow-like structures, and Welsh has pointed out that computing their number corresponds to evaluating the Tutte polynomial at the point (0, –2) [JVW], [W1], and is further equivalent to evaluating ice-type partition functions in statistical physics [W2]. In this paper we resolve the complexity of counting the number of Eulerian orientations of an arbitrary Eulerian graph. We give an efficient randomized approximation algorithm for counting Eulerian orientations of any Eulerian graph. Our algorithm is based on a reduction to counting perfect matchings for a class of graphs for which the methods of Broder [B], Jerrum and Sinclair [JS1], and others [DL], [DS] apply. A crucial step of the reduction is the Monotonicity Lemma (Lemma 3.1), which is of independent combinatorial interest. Roughly speaking, the Monotonicity Lemma establishes the intuitive fact that increasing the number of constraints applied on a flow problem cannot increase the number of solutions. The proof of the lemma involves a new decomposition technique which decouples problematically overlapping structures (a recurrent obstacle in handling large combinatorial populations) and allows detailed enumeration arguments. As a by-product, we exhibit a class of graphs for which perfect and near-perfect matchings are polynomially related, and hence the permanent can be approximated, for reasons other than short augmenting paths (previously the only known approach). We also give the complementary hardness result, namely, that exactly counting Eulerian orientations is #P-complete. Finally, we provide some connections with counting Euler tours.
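As a concrete illustration of the object being counted (though not of the paper's approximation algorithm), the brute-force Python sketch below enumerates all orientations of a small Eulerian graph and keeps those with in-degree equal to out-degree at every vertex; the example graph is my own.

```python
from itertools import product

# A minimal brute-force sketch: count Eulerian orientations of a small undirected
# Eulerian graph by trying every orientation of the edges and keeping those where
# each vertex has in-degree equal to out-degree. Exact counting is #P-complete in
# general, so this enumeration is only feasible for tiny examples.

def count_eulerian_orientations(vertices, edges):
    count = 0
    for choice in product([0, 1], repeat=len(edges)):   # 0: u->v, 1: v->u
        balance = {v: 0 for v in vertices}              # out-degree minus in-degree
        for (u, v), c in zip(edges, choice):
            if c == 0:
                balance[u] += 1; balance[v] -= 1
            else:
                balance[v] += 1; balance[u] -= 1
        if all(b == 0 for b in balance.values()):
            count += 1
    return count

# The 4-cycle has exactly two Eulerian orientations (clockwise and counterclockwise).
c4 = [1, 2, 3, 4], [(1, 2), (2, 3), (3, 4), (4, 1)]
print(count_eulerian_orientations(*c4))   # 2
```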

10.
In 1996, the author attended a seminar on ethics given by C. West Churchman at the University of California, Berkeley. During that year, the author also interviewed Churchman several times regarding the future direction of the information sciences in general and the information systems research field in particular. This article is a compilation of the seminar and the interviews. It is set in the context of both Churchman's earlier and his current views of a global god, good, kindness, and caring. C. West Churchman holds that global ethics should lead to the study and design of information systems to solve large and difficult problems of humankind such as poverty, crime, and disease. His Global Ethical Management (GEM) of the information sciences translates into abandoning the current goals and boundaries of the information sciences fields and changing what constitutes valid research to globally ethical endeavors.

11.
The first proposals for the various component tools of what is now called the translator's workstation or translator's workbench can be traced back to the 1970s and early 1980s, appearing in various, often independent, proposals at different stages in the development of computers and of their use by translators.

12.
This paper discusses terms which are of mutual importance to the fields of information science and computer science. Specifically, we discuss the notions of information and knowledge: their interrelationships as well as their differences, and the concept of value-adding. Concrete examples are used in the discussion. Rainer Kuhlen is Professor of Information Science at the University of Konstanz.

13.
A formal model of atomicity in asynchronous systems
Summary We propose a generalisation of occurrence graphs as a formal model of computational structure. The model is used to define the atomic occurrence of a program, to characterise interference freeness between programs, and to model error recovery in a decentralised system.

14.
EOL forms     
Maurer, H. A., Salomaa, A., Wood, D. Acta Informatica, 1977, 8(1): 75-96
Summary In this paper the notions of L form and its interpretations are introduced to define families of structurally similar L systems. The families of L systems thus defined are studied from a number of different points of view. It is felt that this novel approach will shed new light on many questions concerning L systems.
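For readers new to L systems, the Python sketch below shows the parallel rewriting step that underlies them, using the classic a -> ab, b -> a example; it is only a generic 0L illustration and does not reproduce the paper's EOL-form and interpretation machinery.

```python
# A minimal sketch of the parallel rewriting that characterizes L systems: in each
# derivation step *every* symbol of the current word is rewritten simultaneously
# according to a table of productions.

def l_step(word, productions):
    """One derivation step of a 0L system: rewrite all symbols in parallel."""
    return "".join(productions.get(ch, ch) for ch in word)

def derive(axiom, productions, steps):
    word = axiom
    for _ in range(steps):
        word = l_step(word, productions)
    return word

# Classic algae-like D0L system: a -> ab, b -> a.
productions = {"a": "ab", "b": "a"}
for n in range(5):
    print(n, derive("a", productions, n))
# 0 a
# 1 ab
# 2 aba
# 3 abaab
# 4 abaababa
```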

15.
In this paper, an objective conception of contexts based loosely upon situation theory is developed and formalized. Unlike subjective conceptions, which take contexts to be something like sets of beliefs, contexts on the objective conception are taken to be complex, structured pieces of the world that (in general) contain individuals, other contexts, and propositions about them. An extended first-order language for this account is developed. The language contains complex terms for propositions, and the standard predicate ist that expresses the relation that holds between a context and a proposition just in case the latter is true in the former. The logic for the objective conception features a global classical predicate calculus, a local logic for reasoning within contexts, and axioms for propositions. The specter of paradox is banished from the logic by allowing ist to be nonbivalent in problematic cases: it is not in general the case, for any context c and proposition p, that either ist(c, p) or ist(c, ¬p). An important representational capability of the logic is illustrated by proving an appropriately modified version of an illustrative theorem from McCarthy's classic Blocks World example.
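A minimal Python sketch of the intended behaviour of ist, under my own encoding of contexts as sets of settled propositions (not the paper's formal semantics), shows how nonbivalence arises for propositions a context does not settle.

```python
# A minimal sketch of the ist(c, p) relation: each context records which
# propositions it settles as true and which as false; for unsettled propositions
# neither ist(c, p) nor ist(c, not p) holds, illustrating the nonbivalence used
# to avoid paradox.

class Context:
    def __init__(self, name, true_props=(), false_props=()):
        self.name = name
        self.true_props = set(true_props)
        self.false_props = set(false_props)

def ist(c, prop, negated=False):
    """Return True/False if the context settles the (possibly negated) proposition, else None."""
    if prop in c.true_props:
        return not negated
    if prop in c.false_props:
        return negated
    return None   # the context does not settle this proposition at all

blocks = Context("blocks-world", true_props={"on(A, B)"}, false_props={"on(B, A)"})
print(ist(blocks, "on(A, B)"))                                          # True
print(ist(blocks, "on(B, A)"))                                          # False
print(ist(blocks, "on(C, A)"), ist(blocks, "on(C, A)", negated=True))   # None None
```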

16.
A well-known problem in default logic is the ability of naive reasoners to explain both g and ¬g from a set of observations. This problem is treated in at least two different ways within that camp. One approach is examination of the various explanations and choosing among them on the basis of various explanation comparators. A typical comparator is choosing the explanation that depends on the most specific observation, similar to the notion of the narrowest reference class. Others examine default extensions of the observations and choose whatever is true in any extension, or what is true in all extensions, or what is true in preferred extensions. Default extensions are sometimes thought of as acceptable models of the world that are discarded as more knowledge becomes available. We argue that the notions of specificity and extension lack clear semantics. Furthermore, we show that the problems these ideas were supposed to solve can be handled easily within a probabilistic framework.
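The probabilistic treatment of the standard specificity conflict can be shown in a few lines; the Python sketch below uses hypothetical counts of my own choosing and simply conditions on all available evidence, so the more specific observation dominates without any extension machinery.

```python
# A minimal sketch of how a probabilistic treatment handles the classic specificity
# conflict ("birds fly" vs. "penguins don't fly"): instead of comparing default
# extensions, condition on all available evidence.

# Hypothetical joint counts over (penguin?, flies?) among observed birds.
counts = {("penguin", "flies"): 1, ("penguin", "not flies"): 99,
          ("other", "flies"): 880, ("other", "not flies"): 20}

def prob_flies(given_penguin=None):
    rows = [(k, v) for k, v in counts.items()
            if given_penguin is None or (k[0] == "penguin") == given_penguin]
    total = sum(v for _, v in rows)
    return sum(v for k, v in rows if k[1] == "flies") / total

print(prob_flies())                    # P(flies | bird)          = 0.881
print(prob_flies(given_penguin=True))  # P(flies | bird, penguin) = 0.01
```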

17.
Summary In this paper we give a general definition of what is meant by the total-step, single-step, and successive relaxation iterative methods, and we apply these concepts to systems of linear equations. In the special case of a matrix with zero diagonal entries we obtain the well-known Jacobi, Gauss-Seidel, and relaxation iterative methods. Theorem 1 gives conditions for the convergence of the single-step iterative method for general non-negative matrices; the proof is similar to that given by Stein and Rosenberg [2] (1948) for a special case. A corollary gives conditions for the convergence of the relaxation iterative method for non-negative matrices. We further prove Theorem 2 on the convergence of the relaxation iterative method for diagonally dominant matrices.
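As a concrete reminder of the methods involved (standard textbook forms, not the paper's general total-step/single-step setting), the Python sketch below implements the Jacobi step and the successive relaxation (SOR) sweep, which reduces to Gauss-Seidel when the relaxation factor omega equals 1, on a diagonally dominant example.

```python
import numpy as np

# A minimal sketch: one Jacobi (total-step) update and one SOR (successive
# relaxation) sweep for A x = b. With omega = 1 the SOR sweep is exactly the
# Gauss-Seidel (single-step) method.

def jacobi_step(A, b, x):
    """Total-step update: every component uses only values from the previous iterate."""
    D = np.diag(np.diag(A))
    return np.linalg.solve(D, b - (A - D) @ x)

def sor_sweep(A, b, x, omega=1.0):
    """Single-step/relaxation sweep: each component uses already-updated values."""
    x = x.copy()
    for i in range(len(b)):
        sigma = A[i, :i] @ x[:i] + A[i, i + 1:] @ x[i + 1:]
        x[i] = (1 - omega) * x[i] + omega * (b[i] - sigma) / A[i, i]
    return x

# A diagonally dominant system, the setting of Theorem 2.
A = np.array([[4.0, 1.0, 0.0], [1.0, 5.0, 2.0], [0.0, 2.0, 6.0]])
b = np.array([1.0, 2.0, 3.0])

xj = np.zeros(3)
xs = np.zeros(3)
for _ in range(50):
    xj = jacobi_step(A, b, xj)
    xs = sor_sweep(A, b, xs, omega=1.1)
print(xj, xs, np.linalg.solve(A, b))   # all three agree to several digits
```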

18.
This paper proposes a notion of presence which overcomes its strict interpretation in terms of physical projection and perception, and interprets the sense of being there as the understanding of the meaning of what is going on there. The domain of control environments is considered to illustrate this point and to propose technological solutions combining principles and techniques taken from Artificial Intelligence and Computer Supported Cooperative Work to enhance the interpretation and cooperation capabilities of the involved actors.

19.
This paper suggests ways in which the pattern-matching capability of the computer can be used to further our understanding of stylized ballad language. The study is based upon a computer-aided analysis of the entire 595,000-word corpus of Francis James Child's The English and Scottish Popular Ballads (1882–1892), a collection of 305 textual traditions, most of which are represented by a variety of texts. The paper focuses on the Mary Hamilton tradition as a means of discussing the function of phatic language in the ballad genre and the significance of textual variation. Cathy Lynn Preston is a Research Associate, Computer Research in the Humanities, at the University of Colorado, Boulder. She is interested in folklore, particularly oral narrative; popular literature of the 18th and 19th centuries, particularly broadsides and chapbooks; the works of John Gay, Jonathan Swift, and Thomas Hardy; and Middle English romance and lyric. Her major publications are A KWIC Concordance to Jonathan Swift's A Tale of a Tub, The Battle of the Books, and A Discourse Concerning the Mechanical Operation of the Spirit, A Fragment (New York: Garland Publishing, 1984), co-authored with Harold D. Kelling, and A KWIC Concordance to Thomas Hardy's Tess of the d'Urbervilles (New York: Garland Publishing, 1989).

20.
When interpolating incomplete data, one can choose a parametric model, or opt for a more general approach and use a non-parametric model which allows a very large class of interpolants. A popular non-parametric model for interpolating various types of data is based on regularization, which looks for an interpolant that is both close to the data and also smooth in some sense. Formally, this interpolant is obtained by minimizing an error functional which is the weighted sum of a fidelity term and a smoothness term. The classical approach to regularization is: select optimal weights (also called hyperparameters) to be assigned to these two terms, and minimize the resulting error functional. However, using only the optimal weights does not guarantee that the chosen function will be optimal in some sense, such as the maximum likelihood criterion or the minimal square error criterion; for that, we have to consider all possible weights. The approach suggested here is to use the full probability distribution on the space of admissible functions, as opposed to the probability induced by using a single combination of weights. The reason is as follows: the weight actually determines the probability space in which we are working. For a given weight λ, the probability of a function f is proportional to exp(–λ ∫ (f_uu)² du) (for the case of a function of one variable). For each different λ there is a different solution to the restoration problem; denote it by f_λ. Now, if we had known λ, it would not be necessary to use all the weights; however, all we are given are some noisy measurements of f, and we do not know the correct λ. Therefore, the mathematically correct solution is to calculate, for every λ, the probability that f was sampled from a space whose probability is determined by λ, and average the different f_λ's weighted by these probabilities. The same argument holds for the noise variance, which is also unknown. Three basic problems are addressed in this work: (1) computing the MAP estimate, that is, the function f maximizing Pr(f|D) when the data D is given; this problem is reduced to a one-dimensional optimization problem; (2) computing the MSE estimate, defined at each point x as ∫ f(x) Pr(f|D) df; this problem is reduced to computing a one-dimensional integral (in the general setting, the MAP estimate is not equal to the MSE estimate); (3) computing the pointwise uncertainty associated with the MSE solution; this problem is reduced to computing three one-dimensional integrals.
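The sketch below illustrates the kind of computation involved on a discretized one-dimensional problem: for each smoothness weight lambda a regularized interpolant is obtained by solving a linear system, and the final estimate averages over a grid of lambdas. The discretization, the observation points, and especially the uniform placeholder weights over lambda are my own assumptions; the paper's point is precisely that these weights should come from the probability of each lambda given the data.

```python
import numpy as np

# A minimal sketch of discretized regularized interpolation: for each smoothness
# weight lam the interpolant minimizes
#     ||S f - d||^2 + lam * ||D2 f||^2,
# where S samples the grid at the observation points and D2 takes second
# differences. The final estimate averages the per-lambda solutions instead of
# committing to a single "optimal" lambda. The lambda-weights used here are a
# crude uniform placeholder, not the evidence-based weights of the paper.

n = 101                                  # grid points on [0, 1]
x = np.linspace(0.0, 1.0, n)
idx = np.array([5, 25, 50, 75, 95])      # indices where noisy data is observed
rng = np.random.default_rng(0)
d = np.sin(2 * np.pi * x[idx]) + 0.1 * rng.standard_normal(len(idx))

S = np.zeros((len(idx), n)); S[np.arange(len(idx)), idx] = 1.0   # sampling matrix
D2 = np.diff(np.eye(n), n=2, axis=0)                             # second differences

def f_lambda(lam):
    """Minimizer of ||S f - d||^2 + lam ||D2 f||^2 (a linear system)."""
    A = S.T @ S + lam * D2.T @ D2
    return np.linalg.solve(A, S.T @ d)

lams = np.logspace(-4, 1, 20)
weights = np.full(len(lams), 1.0 / len(lams))      # placeholder: uniform over lambda
f_avg = sum(w * f_lambda(lam) for w, lam in zip(weights, lams))
print(f_avg[idx])   # averaged interpolant roughly follows the data at the samples
```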
