Found 20 similar documents.
1.
2.
Hera is a model-driven methodology for designing Semantic Web Information Systems (SWIS). Based on the principle of separation of concerns, Hera defines models to describe the different aspects of an SWIS. These models are represented using RDF, the foundation language of the Semantic Web. Hera is composed of two phases: the data collection phase, which integrates data from different sources, and the presentation generation phase, which builds a hypermedia presentation for the integrated data. The focus of this paper is on the hypermedia presentation generation phase and the associated model specifications. The Hera presentation generation phase has two variants: a static one that computes a full Web presentation at once, and a dynamic one that computes one page at a time, letting the user influence the next Web page to be presented. The dynamic variant adds, to the models from the static variant, new models that capture the data resulting from the user's interaction with the system. The implementation is based on a sequence of data transformations applied to the Hera models that eventually produces a hypermedia presentation.
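As a rough illustration of the transformation-pipeline idea described above, the sketch below chains three model-driven transformation steps to produce pages. The model names, structures, and example data are hypothetical; Hera's actual models are RDF vocabularies, not Python dicts.

```python
# Minimal sketch of a presentation-generation pipeline: a sequence of
# transformations over models eventually yields a hypermedia presentation.
# Model names and structures here are illustrative, not Hera's actual
# RDF-based model specifications.

def apply_conceptual_model(data):
    """Map raw integrated data onto domain concepts."""
    return {"concepts": [{"type": "Painting", "title": t} for t in data]}

def apply_application_model(conceptual):
    """Group concepts into navigable units."""
    return {"units": [{"heading": c["title"]} for c in conceptual["concepts"]]}

def apply_presentation_model(application):
    """Render each unit as a page of the output presentation."""
    return ["<h1>%s</h1>" % u["heading"] for u in application["units"]]

def generate_presentation(data):
    # Static variant: run the full pipeline at once.
    return apply_presentation_model(
        apply_application_model(apply_conceptual_model(data)))

print(generate_presentation(["Sunflowers", "Irises"]))
```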
3.
Jørgen Steensgaard-Madsen. Acta Informatica, 1979, 12(1): 73-94
Summary. The objectives of this paper are to clarify certain issues in the language Pascal that were left open by the defining Report, and to recommend specific forms for certain extensions of the language that have repeatedly appeared in discussions and even implementations. By encouraging prospective implementors to adopt a common form, we wish to enhance the prospects for practical portability of Pascal programs. Except for parameters that are procedures and functions, the language constructs described by the Pascal Report are not changed, i.e. the new syntax permits the same constructs with the same (intended) meaning.

The paper expresses the author's proposal following a discussion among a small group of people who have been working on Pascal compilers and standardization. In itself it is not an attempt to standardize Pascal or present a complete revision of the Report. Concentrating on the type concept, the meaning of identifiers, and a limited number of extensions whose need is generally accepted, the paper establishes guidelines for standardization and revision of the Pascal Report. It is presented as a carefully deliberated and (hopefully) consistent proposal for acceptance by whatever means the Pascal community may choose to adopt.

I wish to thank everyone mentioned for his comments and cooperation, but above all Niklaus Wirth for his encouraging support and most valuable influence, especially in limiting the number of topics considered in detail. I thank R.D. Tennent and two anonymous referees for their valuable comments and suggestions.
4.
A device called a pushdown assembler was recently introduced and shown capable of defining exactly the syntax-directed translations (SDTs). The output operation of the pushdown assembler can be extended in a natural way to obtain a more powerful device called a type B pushdown assembler (or B-machine). A B-machine can define SDTs more simply and directly than the original pushdown assembler. B-machines can also define many interesting translations which are not SDTs. In this paper the B-machine is defined and compared with the original pushdown assembler. The properties of B-machine translations are investigated and it is shown that, as with SDTs, there exists a natural infinite hierarchy of B-machine translations.
5.
Similarity-based learning and its extensions
This paper synthesizes a number of approaches to concept representation and learning in a multilayered model. The paper emphasizes what has been called similarity-based learning (SBL) from examples, although this review is extended to address wider issues. The paper pays particular attention to requirements for incremental and uncertain environments, and to interrelationships among concept purpose, concept representation, and concept learning.
One goal of the paper is to unite some of the notions underlying recent research, in an attempt to construct a more complete and extensible framework. This framework is designed to capture representations and methods such as those based on hypothesis search and bias selection, and to extend the ideas for greater system capability. This leads to a specific perspective for multilayered learning which has several advantages, such as greater clarity, more uniform learning, and more powerful induction.
The approach clarifies and unifies various aspects of the problem of concept learning. Some results are: (1) Various concept representations (such as logic, prototypes, and decision trees) are subsumed by a standard form which is well suited to learning, particularly in incremental and uncertain environments; (2) Concept learning may be enhanced by exploiting a particular phenomenon in many spaces; this phenomenon is a certain kind of smoothness or regularity, one instance of which underlies the similarity in SBL systems; (3) The paper treats the phenomenon in a general way and applies it hierarchically. This has various advantages of uniformity. For example, the model allows layered learning algorithms for concept learning all to be instantiations of one basic algorithm. A single kind of representation (an instantiation of the standard form) is prominent at each level. The combination of representation and algorithm allows fast, accurate, concise, and robust concept learning.
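To make the similarity principle behind SBL concrete, here is a toy incremental learner that stores each concept in a single standard form (a per-class prototype) and classifies by similarity. This illustrates the general idea only; the class name and representation choice are illustrative, not the paper's model.

```python
# Toy sketch of the similarity principle behind SBL: concepts are kept
# in one uniform representation (per-class prototypes), learned
# incrementally; classification picks the most similar prototype.

class PrototypeLearner:
    def __init__(self):
        self.prototypes = {}   # label -> (mean vector, example count)

    def learn(self, example, label):
        mean, n = self.prototypes.get(label, ([0.0] * len(example), 0))
        self.prototypes[label] = (
            [(m * n + x) / (n + 1) for m, x in zip(mean, example)], n + 1)

    def classify(self, example):
        def similarity(mean):  # negative squared Euclidean distance
            return -sum((m - x) ** 2 for m, x in zip(mean, example))
        return max(self.prototypes,
                   key=lambda lbl: similarity(self.prototypes[lbl][0]))

learner = PrototypeLearner()
learner.learn([1.0, 1.0], "A")
learner.learn([5.0, 5.0], "B")
print(learner.classify([1.2, 0.9]))   # -> "A"
```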
6.
Information and Computation, 2007, 205(4): 557-580
Congruence closure algorithms for deduction in ground equational theories are ubiquitous in many (semi-)decision procedures used for verification and automated deduction. In many of these applications one needs an incremental algorithm that is moreover capable of recovering, among the thousands of input equations, the small subset that explains the equivalence of a given pair of terms. In this paper we present an algorithm satisfying all these requirements. First, building on ideas from abstract congruence closure algorithms, we present a very simple and clean incremental congruence closure algorithm and show that it runs in the best known time O(n log n). After that, we introduce a proof-producing union-find data structure that is then used for extending our congruence closure algorithm, without increasing the overall O(n log n) time, in order to produce a k-step explanation for a given equation in almost optimal time (quasi-linear in k). Finally, we show that the previous algorithms can be smoothly extended, while still obtaining the same asymptotic time bounds, in order to support the interpreted function symbols successor and predecessor, which have been shown to be very useful in applications such as microprocessor verification.
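The following sketch shows the proof-producing union-find idea in simplified form: alongside the usual find/union structure, a "proof forest" records, for each merge, the input equation that caused it, and explain(a, b) reads the justification off the two forest paths. This is an illustration of the technique, not the paper's exact algorithm or complexity-optimal version.

```python
# Simplified proof-producing union-find: union(a, b, reason) records the
# input equation 'reason' in a proof forest; explain(a, b) returns the
# set of input equations justifying a = b.

class ProofUnionFind:
    def __init__(self):
        self.parent = {}        # ordinary union-find parents
        self.proof_parent = {}  # proof-forest edge: node -> (node, reason)

    def _add(self, x):
        if x not in self.parent:
            self.parent[x] = x

    def find(self, x):
        self._add(x)
        while self.parent[x] != x:
            self.parent[x] = self.parent[self.parent[x]]  # path halving
            x = self.parent[x]
        return x

    def union(self, a, b, reason):
        ra, rb = self.find(a), self.find(b)
        if ra == rb:
            return
        self._reroot(a)                       # make a the root of its proof tree
        self.proof_parent[a] = (b, reason)    # labelled proof edge a -> b
        self.parent[ra] = rb

    def _reroot(self, x):
        # Reverse the proof-forest edges on the path from x to its root.
        prev_node, prev_reason = None, None
        while x is not None:
            entry = self.proof_parent.get(x)
            if prev_node is None:
                self.proof_parent.pop(x, None)
            else:
                self.proof_parent[x] = (prev_node, prev_reason)
            if entry is None:
                break
            prev_node, prev_reason = x, entry[1]
            x = entry[0]

    def explain(self, a, b):
        """Input equations justifying a = b."""
        def path(x):
            steps = []
            while x in self.proof_parent:
                y, reason = self.proof_parent[x]
                steps.append((x, y, reason))
                x = y
            return steps, x
        pa, root_a = path(a)
        pb, root_b = path(b)
        assert root_a == root_b, "a and b are not known to be equal"
        while pa and pb and pa[-1] == pb[-1]:  # drop shared edges above the LCA
            pa.pop(); pb.pop()
        return {reason for _, _, reason in pa + pb}

uf = ProofUnionFind()
uf.union("a", "b", "eq1: a=b")
uf.union("b", "c", "eq2: b=c")
uf.union("d", "c", "eq3: d=c")
print(uf.explain("a", "d"))   # -> {'eq1: a=b', 'eq2: b=c', 'eq3: d=c'}
```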
7.
R. Krawczyk. Computing, 1980, 24(2-3): 119-129
For nonlinear operators in partially ordered spaces, interval extensions are defined by means of Lipschitz operators. Assumptions are made to ensure the inclusion monotonicity of these interval extensions. In this manner we obtain interval iteration methods that enclose a solution of an operator equation. By transforming the equation into iterative form, a parameter is chosen appropriately so that the interval sequence converges as fast as possible.
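A minimal scalar sketch of the interval-iteration idea follows, using the classical Krawczyk operator K(X) = m - y f(m) + (1 - y F'(X))(X - m) with m = mid(X) and y approximating 1/f'(m). This is illustrative only: a real implementation needs outward rounding and a proper interval-arithmetic library, and the example problem is ours, not the paper's.

```python
# Interval iteration enclosing a root of f(x) = x^2 - 2 via the
# Krawczyk operator. Toy interval class; no outward rounding.

class Interval:
    def __init__(self, lo, hi):
        self.lo, self.hi = lo, hi
    def __add__(self, o):
        return Interval(self.lo + o.lo, self.hi + o.hi)
    def __sub__(self, o):
        return Interval(self.lo - o.hi, self.hi - o.lo)
    def __mul__(self, o):
        ps = [a * b for a in (self.lo, self.hi) for b in (o.lo, o.hi)]
        return Interval(min(ps), max(ps))
    def intersect(self, o):
        return Interval(max(self.lo, o.lo), min(self.hi, o.hi))
    def mid(self):
        return 0.5 * (self.lo + self.hi)
    def __repr__(self):
        return "[%g, %g]" % (self.lo, self.hi)

def point(c):
    return Interval(c, c)

def f(X):        # interval extension of f(x) = x^2 - 2
    return X * X - point(2.0)

def fprime(X):   # interval extension of f'(x) = 2x
    return point(2.0) * X

X = Interval(1.0, 2.0)            # initial enclosure of sqrt(2)
for _ in range(6):
    m = X.mid()
    y = 1.0 / (2.0 * m)           # approximate 1 / f'(m)
    K = point(m) - point(y) * f(point(m)) + \
        (point(1.0) - point(y) * fprime(X)) * (X - point(m))
    X = X.intersect(K)            # inclusion step
print(X)                          # tight enclosure of sqrt(2) ~ 1.41421
```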
8.
9.
10.
Brownian matrices arise in certain models used in digital signal processing. Fast algorithms for solving brownian systems of linear equations can be derived because any brownian matrix is congruent to a tridiagonal matrix. This property and related ones are developed and extended to matrices having a more general patterned structure.
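As an illustration of the tridiagonal connection (not the paper's general algorithm): for the classical Brownian-motion covariance matrix with entries min(i, j), the inverse factors through the first-difference matrix D as K⁻¹ = DᵀD, so Kx = b is solved in O(n) by two difference passes.

```python
# O(n) solve of K x = b for the Brownian-motion covariance matrix
# K[i][j] = min(i, j) + 1 (1-based min(i, j)), using K^{-1} = D^T D
# where D is the first-difference matrix. Illustrative special case.

def solve_brownian(b):
    n = len(b)
    y = [b[0]] + [b[i] - b[i - 1] for i in range(1, n)]    # y = D b
    return [y[i] - y[i + 1] for i in range(n - 1)] + [y[-1]]  # x = D^T y

b = [1.0, 2.0, 3.0]
x = solve_brownian(b)
K = [[min(i, j) + 1 for j in range(3)] for i in range(3)]
print([sum(K[i][j] * x[j] for j in range(3)) for i in range(3)])  # ~[1, 2, 3]
```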
11.
M. A. Gorelov. Automation and Remote Control, 2009, 70(8): 1396-1405
The paper studies the strategy spaces of all possible informational extensions of a game, endowed with a natural metric. We show that all these extensions can be viewed as subspaces of a single space corresponding to a special quasi-informational extension that has a clear meaningful interpretation. We also obtain an upper bound on the entropy of informational extensions.
12.
Aapo Hyvärinen. Computational Statistics & Data Analysis, 2007, 51(5): 2499-2512
Many probabilistic models are only defined up to a normalization constant. This makes maximum likelihood estimation of the model parameters very difficult. Typically, one then has to resort to Markov chain Monte Carlo methods, or approximations of the normalization constant. Previously, a method called score matching was proposed for computationally efficient yet (locally) consistent estimation of such models. The basic form of score matching is valid, however, only for models which define a differentiable probability density function over R^n. Therefore, some extensions of the framework are proposed. First, a related method for binary variables is proposed. Second, it is shown how to estimate non-normalized models defined on the non-negative real domain, i.e. R_+^n. As a further result, it is shown that the score matching estimator can be obtained in closed form for some exponential families.
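For reference, the basic continuous-domain score matching objective from the earlier work this abstract builds on is reproduced below; the key point is that only derivatives of the log-density enter, so the normalization constant cancels.

```latex
% Basic score matching objective: psi is the score of the
% non-normalized model density \tilde p, so the partition
% function drops out.
\[
  J(\theta)
  = \int_{\mathbb{R}^n} p_{\mathrm{data}}(x)
    \sum_{i=1}^{n}\Big[
      \partial_i \psi_i(x;\theta)
      + \tfrac{1}{2}\,\psi_i(x;\theta)^2
    \Big]\,dx ,
  \qquad
  \psi(x;\theta) = \nabla_x \log \tilde p(x;\theta).
\]
```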
13.
Alexander Ostermann. Computing, 1990, 44(1): 59-68
Solving an initial value problem by a Rosenbrock method produces, in general, the numerical solution at gridpoints that are not known in advance. Applications requiring frequent output (such as graphics, delay differential equations, or problems with driving equations) normally restrict the stepsize control of these codes and increase the computational overhead considerably. In this paper we introduce a class of continuous extensions for Rosenbrock-type methods. These extensions furnish a continuous numerical solution without affecting the efficiency of the codes.
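A continuous extension (dense output) of an s-stage one-step method typically has the generic shape below: the stage values of the current step are reused with polynomial weights, so output between gridpoints is essentially free. This is the generic form, not the paper's specific Rosenbrock coefficients.

```latex
% Generic continuous extension of an s-stage one-step method:
% stage values k_i are reused; the weight polynomials reduce to
% the method's weights at the end of the step.
\[
  u(t_0 + \theta h) \;=\; y_0 \;+\; h \sum_{i=1}^{s} b_i(\theta)\, k_i,
  \qquad \theta \in [0,1],
  \qquad b_i(1) = b_i .
\]
```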
14.
Valdis Berzins. Acta Informatica, 1986, 23(6): 607-619
Summary. The problem of combining independent updates to a program is examined in the context of applicative programs. A partial semantic merge rule is given, together with the conditions under which it is guaranteed to be correct, and the conditions under which a string merge corresponds to a semantic merge are examined. The theoretical work reported here contains initial steps towards a solution of the software merging problem and is not sufficient for producing a practical system.
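To make the problem concrete, here is a toy sketch of one easy case, with a program modeled as a map from names to definitions: two independent updates merge cleanly when they modify disjoint sets of definitions. This captures only the trivial fragment of the problem the paper studies, and the representation is ours.

```python
# Toy merge of two independent updates to an applicative program
# (modeled as name -> definition). Disjoint changes merge cleanly;
# overlapping changes are reported as conflicts.

def changed(base, version):
    keys = set(base) | set(version)
    return {k for k in keys if base.get(k) != version.get(k)}

def merge(base, left, right):
    conflicts = changed(base, left) & changed(base, right)
    if conflicts:
        raise ValueError("overlapping updates: %s" % sorted(conflicts))
    merged = dict(base)
    for version in (left, right):
        for k in changed(base, version):
            if k in version:
                merged[k] = version[k]
            else:
                merged.pop(k, None)   # definition was deleted
    return merged

base  = {"f": "x + 1", "g": "f(x) * 2"}
left  = {"f": "x + 2", "g": "f(x) * 2"}   # edits f
right = {"f": "x + 1", "g": "f(x) * 3"}   # edits g
print(merge(base, left, right))           # {'f': 'x + 2', 'g': 'f(x) * 3'}
```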
15.
16.
17.
Matthew R. Graham, Raymond A. de Callafon. Automatica, 2009, 45(6): 1489-1496
This paper introduces an alternative formulation of the Kalman-Yakubovich-Popov (KYP) Lemma, relating an infinite dimensional Frequency Domain Inequality (FDI) to a pair of finite dimensional Linear Matrix Inequalities (LMIs). It is shown that this new formulation encompasses previous generalizations of the KYP Lemma, which hold in the case where the coefficient matrix of the FDI does not depend on frequency. In addition, it allows the coefficient matrix of the frequency domain inequality to vary affinely with the frequency parameter. One application of these results is illustrated in an example of computing upper bounds on the structured singular value with frequency-dependent scalings.
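For orientation, the classical frequency-independent form that the paper generalizes is stated below (in its strict-inequality version, for A with no imaginary-axis eigenvalues and a symmetric coefficient matrix M); the paper's contribution is to let M vary affinely with frequency.

```latex
% Classical KYP lemma (strict version): the FDI
\[
  \begin{bmatrix} (j\omega I - A)^{-1} B \\ I \end{bmatrix}^{*}
  M
  \begin{bmatrix} (j\omega I - A)^{-1} B \\ I \end{bmatrix}
  \;\prec\; 0
  \quad \text{for all } \omega \in \mathbb{R} \cup \{\infty\}
\]
% holds if and only if there exists P = P^T satisfying the LMI
\[
  M +
  \begin{bmatrix}
    A^{T}P + PA & PB \\
    B^{T}P & 0
  \end{bmatrix}
  \;\prec\; 0 .
\]
```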
18.
Pattern discovery techniques, such as association rule discovery, explore large search spaces of potential patterns to find those that satisfy some user-specified constraints. Due to the large number of patterns considered, they suffer from an extreme risk of type-1 error, that is, of finding patterns that appear to satisfy the constraints on the sample data due to chance alone. This paper proposes techniques to overcome this problem by applying well-established statistical practices, sketched after this entry. These allow the user to enforce a strict upper limit on the risk of experimentwise error. Empirical studies demonstrate that standard pattern discovery techniques can discover numerous spurious patterns when applied to random data and, when applied to real-world data, yield large numbers of patterns that are rejected when subjected to sound statistical evaluation. They also reveal that a number of pragmatic choices about how such tests are performed can greatly affect their power.
Editor: Johannes Fürnkranz.
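A minimal sketch of the holdout-evaluation idea with a Bonferroni-style experimentwise bound: patterns discovered on one portion of the data are tested on a holdout set at level alpha/k. The binomial test of rule confidence against a baseline, and all names and data, are illustrative assumptions, not necessarily the paper's exact procedure.

```python
# Holdout evaluation with an experimentwise error bound: test each
# discovered rule on unseen data at alpha/k (Bonferroni), so the chance
# of accepting ANY spurious pattern is at most alpha.
from math import comb

def binom_sf(k, n, p):
    """P[X >= k] for X ~ Binomial(n, p)."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

def holdout_filter(rules, holdout, alpha=0.05):
    """Keep rules whose holdout confidence significantly beats a baseline."""
    threshold = alpha / len(rules)          # Bonferroni adjustment
    accepted = []
    for antecedent, consequent, baseline in rules:
        covered = [t for t in holdout if antecedent <= t]
        hits = sum(1 for t in covered if consequent <= t)
        if covered and binom_sf(hits, len(covered), baseline) <= threshold:
            accepted.append((antecedent, consequent))
    return accepted

# Transactions as item sets; rules as (antecedent, consequent, baseline).
holdout = [{"bread", "butter"}, {"bread", "butter", "jam"},
           {"bread"}, {"milk"}] * 25
rules = [({"bread"}, {"butter"}, 0.3)]      # hypothetical discovered rule
print(holdout_filter(rules, holdout))       # rule survives the holdout test
```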
19.
Jianhua Chen. Journal of Experimental & Theoretical Artificial Intelligence, 2013, 25(4): 351-363
Abstract. The concept of extension plays an important role in default logic. The notion of an ordered seminormal default theory was introduced (Etherington 1987) to characterize a class of seminormal default theories that have extensions. However, the original definition has a drawback: it depends on the specific representation of the default theory. We introduce the 'canonical representation' of a default theory and redefine the orderedness of a default theory based on its canonical representation. We show that under the new definition, the orderedness of a default theory Δ = (W, D) is intrinsic to the theory itself, independent of the specific representations of W and D. We present a modification of the algorithm in Etherington (1987) for computing extensions of a default theory. More importantly, we prove the conjecture (Etherington 1987) that a modified version of that algorithm converges for general ordered, finite seminormal default theories, whereas the original algorithm was proven (Etherington 1987) to converge only for ordered, finite network default theories, which form a proper subset of the theories considered in this paper.
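To convey the fixed-point flavor of extension computation, here is a toy sketch for normal defaults over literals, with consistency reduced to "no complementary literals". Real default logic requires full theorem proving, and this omits the seminormal and orderedness machinery the paper actually studies; the encoding is ours.

```python
# Toy extension computation for NORMAL defaults (prereq : concl / concl)
# over literals: apply any default whose prerequisite is derived and
# whose conclusion is consistent with the current set, until fixpoint.

def neg(lit):
    return lit[1:] if lit.startswith("~") else "~" + lit

def consistent(lits):
    return not any(neg(l) in lits for l in lits)

def extension(W, defaults):
    """defaults: list of (prerequisite, conclusion) normal defaults."""
    E = set(W)
    changed = True
    while changed:
        changed = False
        for prereq, concl in defaults:
            if prereq in E and concl not in E and consistent(E | {concl}):
                E.add(concl)
                changed = True
    return E

# Classic example: penguins are birds that do not fly.
W = {"penguin"}
defaults = [("penguin", "bird"),    # penguin : bird / bird
            ("penguin", "~fly"),    # penguin : ~fly / ~fly
            ("bird", "fly")]        # bird : fly / fly
print(sorted(extension(W, defaults)))  # '~fly' blocks 'fly' here
```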
20.
The Journal of Supercomputing - Energy efficiency below a specific thermal design power (TDP) has become the main design goal for microprocessors across all market segments. Optimizing the usage of...