1.
Any assessment of the cost of encoding one data structure in another must take into account, among other issues, the intended patterns of traversing the guest structure. Two such usage patterns, namely, worst-edge traversal and all-edges-equally-likely traversal, are particularly significant, since any bounds on encoding costs relative to these patterns yield bounds relative to large classes of other patterns also. The foregoing remarks are formalized in this paper, and a number of techniques for bounding the costs of encodings relative to these special usage patterns are developed and exemplified. Specifically, data structures are represented here as undirected graphs, and a number of lower bounds on the costs of data encodings are derived by comparing various structural features of the guest and host graphs. Relevant features include both maximum and average vertex-degree, volume, and exposure, a measure of connectivity. A preliminary version of this paper was presented, under the title "Toward a theory of data encoding", at the Conference on Theoretical Computer Science, Waterloo, Ontario, August 15–17, 1977.
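To make the two usage patterns concrete, the following Python sketch computes the cost of an encoding under each of them, taking the cost of a guest edge to be the length of the host path representing it; the data representation and this cost measure are simplifying assumptions for illustration, not the paper's formal definitions.

```python
# Illustrative sketch: cost of a data encoding under the two usage patterns
# discussed above. An "encoding" is modelled here simply as a map from each
# guest edge to the host path (list of host vertices) representing it, and
# the cost of traversing one guest edge is taken to be the length of that
# path. These names and this cost measure are assumptions for illustration.

def path_length(path):
    """Number of host edges on the path representing one guest edge."""
    return len(path) - 1

def worst_edge_cost(encoding):
    """Cost under the worst-edge traversal pattern: the most expensive edge."""
    return max(path_length(p) for p in encoding.values())

def average_edge_cost(encoding):
    """Cost under the all-edges-equally-likely pattern: the mean edge cost."""
    return sum(path_length(p) for p in encoding.values()) / len(encoding)

# Example: a 3-vertex guest path a-b-c embedded in a host cycle 0-1-2-3-0.
encoding = {
    ("a", "b"): [0, 1],        # guest edge (a,b) -> host path of length 1
    ("b", "c"): [1, 2, 3],     # guest edge (b,c) -> host path of length 2
}
print(worst_edge_cost(encoding))    # 2
print(average_edge_cost(encoding))  # 1.5
```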
2.
Arnold L. Rosenberg Larry J. Stockmeyer Lawrence Snyder 《Theoretical computer science》1980,11(2):145-165
Given two finite, directed, edge-labeled graphs, G (the guest) and H (the host), an encoding of G into H is an injection of guest-graph edges into host-graph paths that induces an injection of G vertices into H vertices. An encoding is uniform if like-labeled edges map to like-labeled paths. Encodings and uniform encodings are motivated by data structure representation issues. Although uniform encodings are economically expressed, three types of analysis indicate diseconomies introduced by the uniformity condition. First, space usage (i.e., the size of the host as a function of the size of the guest) can be exponential for ‘natural’ encodings such as uniform encodings of arrays into trees. By contrast, strongly connected hosts of size equal to the guests are adequate for nonuniform encodings. Second, the edge dilation under uniform encodings can be nonpolynomial in the size of the guest. By contrast, the same graphs can be nonuniformly encoded with edges mapping to paths of unit length. Third, finding uniform encodings, even when the guest is a line, is PSPACE-complete. By contrast, there is a linear-time algorithm for nonuniform encoding of lines in graphs. Additional upper and lower bounds amplify the limitations of uniform data encodings.
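The uniformity condition is easy to state operationally; the sketch below checks it for an explicitly given encoding (the data representation is an assumption chosen for illustration, not taken from the paper).

```python
# Illustrative sketch (not from the paper): checking the uniformity condition.
# A guest edge is a triple (u, label, v); the encoding maps each guest edge to
# the host path representing it, given as the sequence of host edge labels
# traversed. Uniformity requires that any two guest edges with the same label
# map to paths carrying the same label sequence.

def is_uniform(encoding):
    """encoding: dict mapping (u, label, v) -> tuple of host edge labels."""
    label_to_path = {}
    for (u, label, v), host_labels in encoding.items():
        seen = label_to_path.setdefault(label, host_labels)
        if seen != host_labels:
            return False          # two like-labeled edges got unlike paths
    return True

# Two guest edges labeled 'next' must follow the same label sequence in the host.
uniform = {("a", "next", "b"): ("right",),
           ("b", "next", "c"): ("right",)}
nonuniform = {("a", "next", "b"): ("right",),
              ("b", "next", "c"): ("down", "right")}
print(is_uniform(uniform))     # True
print(is_uniform(nonuniform))  # False
```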
3.
4.
《The Journal of Logic and Algebraic Programming》2010,79(2):174-188
We develop a general study of the algebraic specification practice, originating from the OBJ tradition, which encodes atomic sentences in logical specification languages as Boolean terms. This practice, originally motivated by operational aspects but also leading to a significant increase in expressive power, has recently become important within the context of some formal verification methodologies, mainly because it allows the use of simple equational reasoning for frameworks based on logics that do not have an equational nature. Our development includes a generic rigorous definition of the logics underlying the above-mentioned practice, based on the novel concept of ‘quasi-Boolean encoding’, and a general result on the existence of initial semantics for these logics, and it presents a general method for employing the Birkhoff calculus of conditional equations as a sound calculus for these logics. The high level of generality of our study means that the concepts are introduced and the results are obtained at the level of abstract institutions (in the sense of Goguen and Burstall [12]) and are therefore applicable to a multitude of logical systems and environments.
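As a very rough, concrete reading of this encoding style (the paper itself works abstractly, at the level of institutions), an atomic sentence can be represented as a Boolean-sorted term and asserted by equating it with the constant true, after which plain equational rewriting applies; the term representation below is purely illustrative.

```python
# Rough illustration only (not the paper's institution-theoretic machinery):
# an atomic sentence such as le(0, y) is encoded as a Boolean-sorted term and
# asserted by equating it with the constant 'true', so that ordinary
# equational reasoning can be used. Terms are nested tuples:
# (operation, arg1, arg2, ...).

TRUE = ("true",)

def atom_as_equation(atom):
    """Encode an atomic sentence as the equation  atom = true."""
    return (atom, TRUE)

def rewrite(term, rules):
    """One pass of innermost rewriting with ground rules {lhs: rhs}."""
    if term in rules:
        return rules[term]
    return (term[0],) + tuple(rewrite(t, rules) for t in term[1:])

# Hypothetical specification fragment: the atom le(0, 1), with 1 = s(zero).
one = ("s", ("zero",))
lhs, rhs = atom_as_equation(("le", ("zero",), one))
rules = {lhs: rhs}                         # orient the equation left-to-right
print(rewrite(("and", lhs, TRUE), rules))  # ('and', ('true',), ('true',))
```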
5.
6.
Vincent Vatter 《Journal of Symbolic Computation》2012,47(3):259-265
We describe a practical algorithm which computes the accepting automaton for the insertion encoding of a permutation class, whenever this insertion encoding forms a regular language. This algorithm is implemented in the accompanying Maple package InsEnc, which can automatically compute the rational generating functions for such classes.
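For orientation, the sketch below computes the insertion encoding of a single permutation in the standard way (values inserted in increasing order into slots, each step recorded as f, l, r or m together with the index of the slot used); it covers only the encoding step, not the automaton or generating-function machinery of InsEnc.

```python
# Minimal sketch of the insertion encoding itself (not the automaton
# construction of the paper or the InsEnc package). The permutation is built
# by inserting the values 1, 2, ..., n in increasing order into "slots";
# each insertion is recorded as f (fill), l (left end), r (right end) or
# m (middle), subscripted by the index of the slot used, counted from the left.

def insertion_encoding(perm):
    n = len(perm)
    pos_of = {value: i for i, value in enumerate(perm)}  # value -> position
    filled = [False] * n
    letters = []
    for value in range(1, n + 1):
        p = pos_of[value]
        # Slots are the maximal runs of still-unfilled positions.
        slots, start = [], None
        for i in range(n + 1):
            if i < n and not filled[i]:
                start = i if start is None else start
            elif start is not None:
                slots.append((start, i - 1))
                start = None
        for index, (lo, hi) in enumerate(slots, start=1):
            if lo <= p <= hi:
                if lo == hi:
                    letter = "f"
                elif p == lo:
                    letter = "l"
                elif p == hi:
                    letter = "r"
                else:
                    letter = "m"
                letters.append(f"{letter}{index}")
                break
        filled[p] = True
    return " ".join(letters)

print(insertion_encoding([3, 1, 4, 2]))  # m1 r2 f1 f1
```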
7.
8.
One approach to solving Constraint Satisfaction Problems (CSP) (and related Constraint Optimization Problems (COP)) involving integer and Boolean variables is reduction to the propositional satisfiability problem (SAT). A number of encodings (e.g., direct, log, support, order) exist for this purpose, as well as specific encodings for some frequently encountered constraints (e.g., cardinality constraints, global constraints). However, no single encoding performs well on all classes of problems, so there is a need for a system that supports multiple encodings. We present a system that translates specifications of finite linear CSP problems into SAT instances using several well-known encodings and their combinations. We also present a methodology for selecting a suitable encoding based on simple syntactic features of the input CSP instance. Thorough evaluation has been performed on large publicly available corpora, and our encoding selection method improves upon the efficiency of the existing encodings and the state-of-the-art tools used in the comparison.
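As a small example of one of the encodings named above, the direct encoding turns an integer variable x with domain {1, ..., d} into d Boolean variables meaning "x = v", plus an at-least-one clause and pairwise at-most-one clauses; the sketch below follows the textbook definition and is not code from the system described.

```python
# Small illustration of the direct encoding (textbook form, not the actual
# translator described in the paper). An integer variable x with domain
# {1, ..., d} becomes d Boolean variables x_v ("x takes value v"); we emit an
# at-least-one clause and pairwise at-most-one clauses in DIMACS-style CNF,
# where literal v stands for x_v and -v for its negation.

from itertools import combinations

def direct_encoding(domain_size):
    """Return CNF clauses (lists of ints) for one variable with the given domain."""
    variables = list(range(1, domain_size + 1))
    clauses = [variables[:]]                      # at least one value is taken
    for a, b in combinations(variables, 2):
        clauses.append([-a, -b])                  # at most one value is taken
    return clauses

cnf = direct_encoding(4) + [[-2]]   # the constraint x != 2 becomes a unit clause
print(cnf)
# [[1, 2, 3, 4], [-1, -2], [-1, -3], [-1, -4], [-2, -3], [-2, -4], [-3, -4], [-2]]
```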
9.
Sancho Salcedo-Sanz Maurizio Naldi Antonio Portilla-Figueras 《Information Sciences》2009,179(20):3461-2259
This paper introduces the oriented-tree network design problem (OTNDP), a general problem of tree network design with several applications in different fields. We also present several adaptations needed by evolutionary algorithms with Cayley-type encodings to tackle the OTNDP. In particular, we present these adaptations in two Cayley-encodings known as Prüfer and Dandelion codes. We include changes in Cayley-encodings to consider rooted trees. We also show how to use a fixed-length encoding for Cayley codes in evolutionary algorithms, and how to guarantee that the optimal solution is included in the search space. Finally, we present several adaptations of the evolutionary algorithm’s operators to deal with Cayley-encodings for the OTNDP. In the experimental part of the paper, we compare the performance of an evolutionary algorithm (implementing the two Cayley-encodings considered) in several OTNDP instances: first, we test the proposed techniques in randomly generated instances, and second, we tackle a real application of the OTNDP: the optimal design of an interactive voice response system (IVR) in a call center.
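For reference, the classical Prüfer code is a bijection between labelled trees on n vertices and sequences of length n-2 over the labels; the sketch below is the standard decoding algorithm, not the rooted, fixed-length adaptations developed in the paper.

```python
# Standard Prüfer decoding (the classical algorithm, not the rooted or
# fixed-length adaptations of the paper): a sequence of length n-2 over the
# labels 0..n-1 is turned into the edge list of a labelled tree on n vertices.

import heapq

def prufer_decode(code):
    n = len(code) + 2
    degree = [1] * n
    for v in code:
        degree[v] += 1                     # each appearance adds one incident edge
    leaves = [v for v in range(n) if degree[v] == 1]
    heapq.heapify(leaves)
    edges = []
    for v in code:
        leaf = heapq.heappop(leaves)       # smallest current leaf
        edges.append((leaf, v))
        degree[v] -= 1
        if degree[v] == 1:
            heapq.heappush(leaves, v)
    edges.append((heapq.heappop(leaves), heapq.heappop(leaves)))
    return edges

print(prufer_decode([3, 3, 0]))  # [(1, 3), (2, 3), (3, 0), (0, 4)] -- a tree on 5 vertices
```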
10.
P. Ciarlini 《Calcolo》1974,11(3):341-350
Since the Second Symposium on Symbolic and Algebraic Manipulation (1971), with the development of many algebraic manipulation systems, there has been demand for an «efficiency measurement» for these systems. Some of the frequently used systems, as Altran and Camal, provide for numerical encodings of polynomials and of rational functions. This kind of representation for algebraic structures is particularly useful from the space saving point of view. This paper introduces a methodology for comparing different kinds of encodings for monomials, and hence for polynomials. This new methodological approach to the evaluation of the encodings uses four criteria, which are related both to static and dynamic aspects for algebraic data structures processing. The set of monomials with n variables is considered as a vector space and its topology is discussed. In order to exemplify our approach two particular encodings are considered. According to the proposed criteria it will be shown how the so-called encoding by juxtaposition can be considered very good. A direction for possible development is also presented.
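One plausible reading of an encoding by juxtaposition (an assumption for illustration; the specific schemes compared in the paper are not reproduced here) is to pack the exponent vector of a monomial into a single integer by juxtaposing fixed-width bit fields, the kind of space-saving representation used by systems such as Altran.

```python
# Hypothetical illustration of packing a monomial by juxtaposition: the
# exponent vector is stored as fixed-width bit fields placed side by side
# inside one integer. The field width and layout are assumptions chosen for
# the example, not the specific encodings evaluated in the paper.

FIELD_BITS = 8            # each exponent must fit in 0..255
MAX_EXPONENT = (1 << FIELD_BITS) - 1

def pack_monomial(exponents):
    """x1^e1 * x2^e2 * ... -> one integer with e1 in the lowest field."""
    word = 0
    for i, e in enumerate(exponents):
        if not 0 <= e <= MAX_EXPONENT:
            raise ValueError("exponent out of range for this field width")
        word |= e << (i * FIELD_BITS)
    return word

def unpack_monomial(word, n_variables):
    return [(word >> (i * FIELD_BITS)) & MAX_EXPONENT for i in range(n_variables)]

# x^2 * y^0 * z^5 packed into a single integer and recovered.
w = pack_monomial([2, 0, 5])
print(hex(w))                    # 0x50002
print(unpack_monomial(w, 3))     # [2, 0, 5]
```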
11.
Gregory S. Hornby 《Genetic Programming and Evolvable Machines》2006,7(3):231-252
There are various representations for encoding graph structures, such as artificial neural networks (ANNs) and circuits, each with its own strengths and weaknesses. Here we analyze edge encodings and show that they produce graphs with a node creation order connectivity bias (NCOCB). Additionally, depending on how input/output (I/O) nodes are handled, it can be difficult to generate ANNs with the correct number of I/O nodes. We compare two edge encoding languages, one which explicitly creates I/O nodes and one which connects to pre-existing I/O nodes with parameterized connection operators. Results from experiments show that these parameterized operators greatly improve the probability of creating and maintaining networks with the correct number of I/O nodes, remove the connectivity bias with I/O nodes and produce better ANNs. These results suggest that evolution with a representation which does not have the NCOCB will produce better performing ANNs. Finally we close with a discussion on which directions hold the most promise for future work in developing better representations for graph structures.
12.
Peter Korteweg Alberto Marchetti-Spaccamela Leen Stougie Andrea Vitaletti 《Theoretical computer science》2009
In a sensor network, the sensors, or nodes, obtain data and have to communicate these data to a central node. Because sensors are battery-powered, they are highly energy constrained. Data aggregation can be used to combine data of several sensors into a single message, thus reducing sensor communication costs at the expense of message delays. The main problem of data aggregation is therefore to balance the communication and delay costs.
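As a generic illustration of that trade-off (not an algorithm from the paper), the sketch below evaluates one transmission schedule at a single node, charging a fixed cost per message plus the total time items wait before being sent.

```python
# Generic illustration of the communication/delay trade-off at one sensor node
# (not from the paper). Data items arrive at given times; whenever the node
# transmits, all waiting items are aggregated into a single message.
# Cost = (fixed cost per message) * (#messages) + total waiting time of items.

def schedule_cost(arrival_times, flush_times, message_cost):
    arrivals = sorted(arrival_times)
    flushes = sorted(flush_times)
    total_delay, i = 0.0, 0
    for flush in flushes:
        while i < len(arrivals) and arrivals[i] <= flush:
            total_delay += flush - arrivals[i]   # item waits until the flush
            i += 1
    if i < len(arrivals):
        raise ValueError("some items are never transmitted")
    return message_cost * len(flushes) + total_delay

arrivals = [0, 1, 2, 6, 7]
print(schedule_cost(arrivals, flush_times=[2, 7], message_cost=5))  # 2 msgs: 10 + (2+1+0+1+0) = 14.0
print(schedule_cost(arrivals, flush_times=[7], message_cost=5))     # 1 msg:   5 + (7+6+5+1+0) = 24.0
```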
13.
14.
When using formal methods, security protocols are usually modeled at a high level of abstraction. In particular, data encoding and decoding transformations are often abstracted away. However, if no assumptions at all are made on the behavior of such transformations, they could trivially lead to security faults, for example leaking secrets or breaking freshness by collapsing nonces into constants. In order to address this issue, this paper formally states sufficient conditions, checkable on sequential code, such that if an abstract protocol model is secure under a Dolev–Yao adversary, then a refined model, which takes into account a wide class of possible implementations of the encoding/decoding operations, is secure too under the same adversary model. The paper also indicates possible exploitations of this result in the context of methods based on formal model extraction from implementation code and of methods based on automated code generation from formally verified models.
15.
《Data Processing》1986,28(1):30-34
Data dictionaries (DD) hold information about data held by an organization. The most important features of DDs to an auditor are security and reporting. DDs can be difficult to compare as there is no agreed standard for their design. One company had problems in finding a suitable DD and implementing it.
16.
Our goal is to construct large-scale lexicons for interlingual MT of English, Arabic, Korean, and Spanish. We describe techniques that predict salient linguistic features of a non-English word using the features of its English gloss (i.e., translation) in a bilingual dictionary. While not exact, owing to inexact glosses and language-to-language variations, these techniques can augment an existing dictionary with reasonable accuracy, thus saving significant time. We have conducted two experiments that demonstrate the value of these techniques. The first tested the feasibility of building a database of thematic grids for over 6500 Arabic verbs based on a mapping between English glosses and the syntactic codes in Longman's Dictionary of Contemporary English (LDOCE) (Procter, 1978). We show that it is more efficient and less error-prone to hand-verify the automatically constructed grids than it would be to build the thematic grids by hand from scratch. The second experiment tested the automatic classification of verbs into a richer semantic typology based on (Levin, 1993), from which we can derive a more refined set of thematic grids. In this second experiment, we show that a brute-force, non-robust technique provides 72% accuracy for semantic classification of LDOCE verbs; we then show that it is possible to approach this yield with a more robust technique based on fine-tuned statistical correlations. We further suggest the possibility of raising this yield by taking into account linguistic factors such as polysemy and positive and negative constraints on the syntax-semantics relation. We conclude that, while human intervention will always be necessary for the construction of a semantic classification from LDOCE, such intervention is significantly minimized as more knowledge about the syntax-semantics relation is introduced.
17.
Coinductive Data Types in Programming Languages and Their Applications
Inductive data types use algebraic methods to describe, inductively and from the perspective of construction, the finite syntactic structure of data types, but they are less suited to describing dynamic behavior. As the category-theoretic dual of inductive data types, coinductive data types use coalgebraic methods to describe the dynamic behavior of data types coinductively, from the perspective of observation. This paper first defines inductive data types in programming languages from the viewpoint of category theory and algebra and analyzes the corresponding recursion operations; it then gives a category-theoretic definition of coinductive data types using coalgebras and, based on the finality of coinductive data types, analyzes the corresponding corecursion operations; finally, it shows how inductive and coinductive data types can be integrated by means of bialgebras and distributive laws, and explores the relationship between the syntactic construction and the dynamic behavior of data types.
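To make the contrast concrete, the sketch below renders it in executable form (Python is only an illustration vehicle, not the categorical setting of the paper): an inductive list is consumed by recursion (a fold), while a coinductive stream is defined by the observations head and tail and produced by corecursion (an unfold from a seed), which finality guarantees to exist.

```python
# Sketch of the induction/coinduction contrast in executable form. An
# inductive (finite) list is consumed by recursion (fold); a coinductive
# stream is defined only by the observations head/tail and produced by
# corecursion (unfold from a seed), so it may be infinite.

from dataclasses import dataclass
from typing import Any, Callable

def fold(xs, step, init):
    """Recursion over an inductive (finite) list."""
    acc = init
    for x in xs:
        acc = step(acc, x)
    return acc

@dataclass
class Stream:
    head: Any
    tail_thunk: Callable[[], "Stream"]     # the tail observation, computed on demand
    def tail(self):
        return self.tail_thunk()

def unfold(seed, observe):
    """Corecursion: observe(seed) -> (head, next_seed) defines the stream."""
    head, next_seed = observe(seed)
    return Stream(head, lambda: unfold(next_seed, observe))

naturals = unfold(0, lambda n: (n, n + 1))         # the infinite stream 0, 1, 2, ...
s, first_five = naturals, []
for _ in range(5):
    first_five.append(s.head)
    s = s.tail()
print(first_five)                                  # [0, 1, 2, 3, 4]
print(fold([1, 2, 3, 4], lambda a, x: a + x, 0))   # 10
```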
18.
19.
Mila E. Majster 《Theoretical computer science》1979,8(1):89-127
A formal framework is proposed for discussing the algebraic properties of data types. In particular, the specification problem, i.e., the problem of how a particular data type can be finitely specified, is discussed. Denotational and operational approaches are compared.
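A textbook instance of the specification problem (chosen for illustration; it is not an example from the paper) is the equational specification of stacks, whose equations can be oriented into rewrite rules and executed.

```python
# Classic illustrative example of a finitely specified data type (a textbook
# stack specification, not drawn from the paper). The algebraic specification
# consists of constructors and equations; orienting the equations
# left-to-right gives an executable, operational reading.
#
#   sorts:      Stack, Elem
#   operations: empty : Stack
#               push  : Stack x Elem -> Stack
#               pop   : Stack -> Stack
#               top   : Stack -> Elem
#   equations:  pop(push(s, x)) = s
#               top(push(s, x)) = x

def pop(stack):
    op, *args = stack
    if op == "push":                 # equation: pop(push(s, x)) = s
        return args[0]
    raise ValueError("pop(empty) is unspecified in this presentation")

def top(stack):
    op, *args = stack
    if op == "push":                 # equation: top(push(s, x)) = x
        return args[1]
    raise ValueError("top(empty) is unspecified in this presentation")

empty = ("empty",)
s = ("push", ("push", empty, "a"), "b")   # the term push(push(empty, a), b)
print(top(s))        # 'b'
print(top(pop(s)))   # 'a'
```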