Similar Documents
20 similar documents found (search time: 15 ms)
1.
It is well known that every boundary (linear) eNCE graph language that is connected and degree-bounded by a constant is in LOGCFL (NLOG). The present paper proves an upper bound on the complexity of boundary (linear) eNCE graph languages whose component and degree bounds are functions of the size of the graph. As a corollary of this analysis, the known LOGCFL and NLOG results stated above hold even if “connected” is replaced by “component-bounded by log n.” Received: 15 January 1997 / 17 January 2001

2.
In this paper we define bilinear languages via cutpoints and bilinear automata, and we obtain a necessary condition for a bilinear language to be regular. We also prove that there exists a bilinear language which is nonstochastic.

3.
4.
We investigate various language families which can be obtained from sentential form languages of semi-Thue systems by applying codings, weak codings, non-erasing homomorphisms, and arbitrary homomorphisms. We also distinguish between monotone, context-independent, and context-dependent semi-Thue systems with at most one or finitely many starting words. Finally, we study the effect of erasing productions u → λ. We obtain complete diagrams which show inclusions and incomparabilities of all the language families so defined.

5.
6.
Trace monoids are obtained from free monoids by defining a subset I of pairs of letters that are allowed to commute. Most of the work of this theory is an attempt to relate the properties of these monoids to the properties of I. Following the work initiated by Büchi we show that when the reflexive closure of I is transitive (the trace monoid is then a free product of free commutative monoids) it is possible to define a second-order logic whose models are the traces viewed as dependence graphs and which characterizes exactly the sets of traces that are rational. This logic essentially utilizes a predicate based on the partial ordering defined by the dependence graph and a predicate related to a restricted use of the comparison of cardinality. This research was supported by the PRC Mathématiques et Informatique.
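As standard background (not specific to this abstract), the trace monoid over an alphabet Σ with independence relation I ⊆ Σ × Σ is the quotient of the free monoid by the commutations that I permits:

\[
  M(\Sigma, I) \;=\; \Sigma^{*} / \equiv_I ,
  \qquad
  \text{where } \equiv_I \text{ is the least congruence on } \Sigma^{*}
  \text{ with } ab \equiv_I ba \text{ for all } (a, b) \in I .
\]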

7.
Mahaney and others have shown that sparse self-reducible sets have time-efficient algorithms, and have concluded that it is unlikely that NP has sparse complete sets. Mahaney's work, intuition, and a 1978 conjecture of Hartmanis notwithstanding, nothing was known about the density of complete sets for feasible classes until now. This paper shows that sparse self-reducible sets have space-efficient algorithms and, in many cases, even have time-space-efficient algorithms. We conclude that NL, NC^k, AC^k, LOG(DCFL), LOG(CFL), and P lack complete (or even Turing-hard) sets of low density unless implausible complexity class inclusions hold. In particular, if NL (respectively P, Σ_k, or NP) has a polylog-sparse logspace-hard set, then NL ⊆ SC (respectively P ⊆ SC, Σ_k ⊆ SC, or PH ⊆ SC), and if P has subpolynomially sparse logspace-hard sets, then P ≠ PSPACE.

8.
Morvan and Stirling have proved that the context-sensitive languages are exactly the traces of graphs defined by transducers with labelled final states. We prove that this result is still true if we restrict to the traces of graphs defined by synchronized transducers with labelled final states. From their construction, we deduce that the context-sensitive languages are the languages of path labels leading from and to rational vertex sets of letter-to-letter rational graphs.

9.
LFC is a functional language based on recursive functions defined in context-free languages. In this paper, a new pattern-matching algorithm for LFC is presented, which can represent a sequence of patterns as a single integer by an encoding method. It is a rather simple method and produces efficient case-expressions for the pattern-matching definitions of LFC. The algorithm can also be used for other functional languages, but for nested patterns it may become complicated, and further study is needed.
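The abstract does not spell out the encoding, so the following is only an illustrative guess at the general idea, not the paper's algorithm: if every position of a pattern sequence ranges over a known, finite set of constructors, the whole sequence can be packed into one integer with a mixed-radix encoding, and a dispatch table keyed by that integer can then play the role of a generated case-expression. The constructor names and helper functions below are hypothetical.

# Illustrative mixed-radix encoding of a pattern sequence as a single integer.
# Assumption (not from the paper): position i of the sequence is one of a
# known finite list of constructors, e.g. ["Nil", "Cons", "_"].

def encode(patterns, alphabets):
    """Map a sequence of constructor names to one integer (mixed radix)."""
    code = 0
    for pat, alphabet in zip(patterns, alphabets):
        code = code * len(alphabet) + alphabet.index(pat)
    return code

def decode(code, alphabets):
    """Invert encode(): recover the constructor sequence from the integer."""
    patterns = []
    for alphabet in reversed(alphabets):
        code, idx = divmod(code, len(alphabet))
        patterns.append(alphabet[idx])
    return list(reversed(patterns))

if __name__ == "__main__":
    alphabets = [["Nil", "Cons", "_"], ["Zero", "Succ", "_"]]
    key = encode(["Cons", "Zero"], alphabets)
    print(key)                      # 3 -- a single integer identifying the case
    print(decode(key, alphabets))   # ['Cons', 'Zero']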

10.
The cross-section enumeration problem is to list all words of length n in a regular language L in lexicographical order. The enumeration problem is to list the first m words in L according to radix order. We present an algorithm for the cross-section enumeration problem that is linear in n + t, where t is the output size. We provide a detailed analysis of the asymptotic running time of our algorithm and that of known algorithms for both enumeration problems. We discuss some shortcomings of the enumeration algorithm found in the Grail computation package. In the practical domain, we modify Mäkinen's enumeration algorithm to get an algorithm that is usually the most efficient in practice. We performed an extensive performance analysis of the new and previously known enumeration and cross-section enumeration algorithms and found when each algorithm is preferable.
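To make the cross-section problem concrete, here is a naive backtracking sketch that lists all length-n words accepted by a DFA in lexicographical order; it only illustrates the problem statement, not the linear-time algorithm of the paper (the sketch can waste time in branches with no accepting completion, which is precisely what an algorithm linear in n + t must avoid).

# List all words of length n accepted by a DFA, in lexicographical order.
# Naive backtracking illustration only.

def cross_section(delta, start, accepting, alphabet, n):
    """delta maps (state, symbol) -> state; alphabet must be sorted."""
    results = []

    def extend(state, prefix):
        if len(prefix) == n:
            if state in accepting:
                results.append("".join(prefix))
            return
        for symbol in alphabet:          # sorted alphabet => lexicographic output
            nxt = delta.get((state, symbol))
            if nxt is not None:
                extend(nxt, prefix + [symbol])

    extend(start, [])
    return results

if __name__ == "__main__":
    # DFA for words over {a, b} containing an even number of b's.
    delta = {("even", "a"): "even", ("even", "b"): "odd",
             ("odd", "a"): "odd", ("odd", "b"): "even"}
    print(cross_section(delta, "even", {"even"}, ["a", "b"], 3))
    # ['aaa', 'abb', 'bab', 'bba']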

11.
12.
Many interesting applications of concurrent logic languages require the ability to initiate, monitor, and control computations. Two approaches to the implementation of these control functions have been proposed: one based on kernel support and the other on program transformation. The efficiency of the two approaches has not previously been compared. This paper presents an implementation scheme based on kernel support, applicable to both uniprocessor and multiprocessor architectures. Experimental studies on a uniprocessor show the scheme to be more efficient than equivalent program transformations. Analysis of a multiprocessor implementation shows that the scheme imposes little communication and computation overhead.

13.
This paper presents a method for efficient and scalable logo recognition. Using the generalized Hough transform to identify local features that are invariant across images, we can efficiently add spatial information to groups of local features and enhance the discriminative power of local features. Our method is more flexible and efficient than state-of-the-art methods that merge features into groups. To fully exploit the information that different logo images provide, we employ a reference-based image representation scheme to represent training and testing images. Experiments on challenging datasets show that our method is efficient and scalable and achieves state-of-the-art performance.
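As a generic sketch of Hough-style voting over local-feature matches (the paper's grouping and reference-based representation are richer than this; the feature format below is an assumed, hypothetical one):

# Each tentative match between a test-image feature and a model (logo) feature
# votes for a translation + scale; matches that fall into the same bin form a
# geometrically consistent group. Illustration only.
from collections import defaultdict
import math

def hough_consistent_matches(matches, pos_bin=20.0, scale_bin=0.5):
    """matches: list of (test_feature, model_feature); each feature is a dict
    with keys 'x', 'y', 'scale' (hypothetical format, orientation omitted)."""
    votes = defaultdict(list)
    for test, model in matches:
        s = test["scale"] / model["scale"]           # relative scale
        tx = test["x"] - s * model["x"]              # implied translation
        ty = test["y"] - s * model["y"]
        key = (round(tx / pos_bin), round(ty / pos_bin),
               round(math.log2(s) / scale_bin))
        votes[key].append((test, model))
    # The best-supported pose hypothesis; e.g. accept the logo if it has >= 3 votes.
    return max(votes.values(), key=len) if votes else []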

14.
Implementations of operations on general data structures in definitional languages often lead to excessive copying and storage requirements. To partially overcome this problem, users are given facilities to select efficient storage structures or to guide storage allocation. This contradicts the spirit of definitional languages, requiring the user to get involved with implementation details.

This paper presents a method for automatically recognizing excessive copying and optimizing the storage for data structures. Based on analysis of data dependencies, the storage may be reduced from an entire structure to individual elements of the structure. The benefits are especially significant in incremental structures, where only a constant number of elements of a large data structure is modified in each operation. For incremental structures, copying of unchanged parts of the structure is avoided, and unnecessary iterations are eliminated, without involving the user in this consideration. The user is thus relieved of considering the inefficiencies inherent in specifications in definitional languages. The method is applicable to a variety of language processors and computer architectures. The proposed optimization method produces better results than those obtained by explicit storage references.

The paper describes the implementation of the method in the compiler of the MODEL definitional language. First, criteria are presented for recognizing structures that may be optimized. Then, a transformation that removes iterations implied by the operations on incremental structures is described. The method is exemplified by its application to the non-recursive, iterative computation of Ackermann's function. (A generic illustration of the incremental-structure situation follows below.)
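A generic illustration (plain Python, not MODEL syntax) of the incremental-structure situation: in a definitional reading, each step defines a new version of a vector that differs from the previous version in a single element, so a literal implementation copies the unchanged elements at every step, while the optimized form writes only the one changed element in place.

# Definitional style: version i of 'sums' is a full copy of version i-1,
# changed in one position.
def prefix_sums_copying(xs):
    sums = [0] * len(xs)
    versions = [sums]
    for i, x in enumerate(xs):
        nxt = versions[-1][:]                 # copies all unchanged elements too
        nxt[i] = (nxt[i - 1] if i else 0) + x
        versions.append(nxt)
    return versions[-1]

# After the storage optimization: only the single changed element is written.
def prefix_sums_in_place(xs):
    sums = [0] * len(xs)
    for i, x in enumerate(xs):
        sums[i] = (sums[i - 1] if i else 0) + x
    return sums

assert prefix_sums_copying([3, 1, 4]) == prefix_sums_in_place([3, 1, 4]) == [3, 4, 8]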


15.
16.
17.
A device called a recursive graph, defining context-free languages, is described. Previously such a device was used by Conway [1] for compiler design; Conway called it a “transition diagram”. Roughly, a recursive graph is a finite set of finite-state graphs which are used in a recursive manner. A recursive graph covers the essential features of both standard devices: it describes the syntactical structure as grammars do, and it represents a method for recognition and parsing as push-down automata do. A notion of k-determinacy is introduced for recursive graphs and for a restricted kind of them, called simple recursive graphs. The k-deterministic simple recursive graphs are more general than LL(k) grammars [5] with respect to parsing power, but equal with respect to language generation power. The more general k-deterministic recursive graphs cannot parse the full set of LR(k) grammars [4], but they can recognize the full set of LR(k) languages. The set of languages recognized by (simple) recursive graphs is the set of context-free languages.
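A minimal sketch of the idea (recursive transition networks in general, not the paper's k-deterministic construction): each named graph is a finite-state graph whose edges are labelled either with a terminal symbol or with the name of another graph, which is then traversed recursively. The graph encoding below is an assumed, illustrative format.

# Backtracking recognizer over a set of recursive graphs.
def accepts(graphs, start_graph, word):
    """graphs: name -> {'start': q0, 'final': {...}, 'edges': [(q, label, q2), ...]}"""

    def walk(name, pos):
        g = graphs[name]

        def step(state, pos):
            if state in g["final"]:
                yield pos                      # may stop here, or keep going
            for q, label, q2 in g["edges"]:
                if q != state:
                    continue
                if label in graphs:            # edge labelled by another graph: recurse
                    for pos2 in walk(label, pos):
                        yield from step(q2, pos2)
                elif pos < len(word) and word[pos] == label:
                    yield from step(q2, pos + 1)

        yield from step(g["start"], pos)

    return any(pos == len(word) for pos in walk(start_graph, 0))

if __name__ == "__main__":
    # One graph for balanced parentheses, mimicking S -> ( S ) S | empty.
    graphs = {"S": {"start": 0, "final": {0, 4},
                    "edges": [(0, "(", 1), (1, "S", 2), (2, ")", 3), (3, "S", 4)]}}
    print(accepts(graphs, "S", "(()())"))   # True
    print(accepts(graphs, "S", "(()"))      # False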

18.
Our goal is to design encryption schemes for mass distribution of data which make it possible to (1) deter users from leaking their personal keys, (2) trace the identities of users whose keys were used to construct illegal decryption devices, and (3) revoke these keys so as to render the devices dysfunctional. We start by designing an efficient revocation scheme based on secret sharing. It can remove up to t parties, is secure against coalitions of up to t users, and is more efficient than previous schemes with the same properties. We then show how to enhance the revocation scheme with traitor tracing and self-enforcement properties. More precisely, we show how to construct schemes such that (1) each user's personal key contains some sensitive information of that user (e.g., the user's credit card number), in order to make users reluctant to disclose their keys; (2) an illegal decryption device discloses the identity of users who contributed keys to construct the device; and (3) it is possible to revoke the keys of corrupt users. For the last point, it is important to be able to do so without publicly disclosing the sensitive information.
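A rough, one-shot illustration of how secret sharing yields revocation (a plain-field sketch, not the paper's scheme, which additionally wraps the shares so they can be reused and adds tracing and self-enforcement): user i holds the share (i, P(i)) of a random degree-t polynomial P with P(0) = s; to revoke a set of t users, broadcast their shares; every non-revoked user then knows t + 1 distinct points and can interpolate s, while the revoked users, alone or in coalition, only hold points that are already public.

# One-shot revocation from Shamir secret sharing over a prime field.
import random

Q = 2 ** 61 - 1          # a Mersenne prime used as the field modulus

def make_scheme(t, user_ids):
    """Return (secret, shares): shares[i] = P(i) for a random degree-t P with P(0) = secret."""
    coeffs = [random.randrange(Q) for _ in range(t + 1)]   # coeffs[0] is the secret
    def P(x):
        return sum(c * pow(x, k, Q) for k, c in enumerate(coeffs)) % Q
    return coeffs[0], {i: P(i) for i in user_ids}

def interpolate_at_zero(points):
    """Lagrange interpolation of P(0) from t+1 points (x_i, y_i) over GF(Q)."""
    secret = 0
    for xi, yi in points:
        num, den = 1, 1
        for xj, _ in points:
            if xj != xi:
                num = num * (-xj) % Q
                den = den * (xi - xj) % Q
        secret = (secret + yi * num * pow(den, Q - 2, Q)) % Q
    return secret

if __name__ == "__main__":
    t = 2
    secret, shares = make_scheme(t, user_ids=[1, 2, 3, 4, 5])
    revoked = [4, 5]
    broadcast = [(i, shares[i]) for i in revoked]            # published to everyone
    # A non-revoked user (here user 1) combines its own share with the broadcast:
    assert interpolate_at_zero(broadcast + [(1, shares[1])]) == secret
    # Revoked users only ever see the t broadcast points, which do not determine
    # a degree-t polynomial, so they cannot recover the secret.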

19.
The purpose of this paper is to present a stochastic syntactic approach to the representation and classification of fingerprint patterns. The fingerprint impressions are subdivided into sampling squares which are preprocessed for feature extraction. First, a brief summary of the application of a class of context-free languages to the recognition of fingerprints is presented. Next, using the same set of features, a class of stochastic context-free languages is used to further classify the fingerprint impressions. The recognizers using the class of context-free and stochastic context-free languages are named the first-level and second-level classifiers, respectively. Experimental results on real fingerprint data are presented.
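As an illustration of what a second-level (stochastic) classifier can look like in general (hypothetical toy grammars, not the fingerprint grammars of the paper): score the feature string under each class's stochastic context-free grammar in Chomsky normal form with the inside (probabilistic CYK) algorithm and pick the class of highest probability.

# Inside-probability scoring of a string under stochastic CFGs in CNF.
from collections import defaultdict

def inside_probability(grammar, start, word):
    """grammar: list of rules (A, (B, C), p) or (A, (a,), p) in Chomsky normal form."""
    n = len(word)
    table = defaultdict(float)                 # (i, j, A) -> P(A =>* word[i:j])
    for i, a in enumerate(word):
        for A, rhs, p in grammar:
            if rhs == (a,):
                table[(i, i + 1, A)] += p
    for span in range(2, n + 1):
        for i in range(n - span + 1):
            j = i + span
            for A, rhs, p in grammar:
                if len(rhs) == 2:
                    B, C = rhs
                    for k in range(i + 1, j):
                        table[(i, j, A)] += p * table[(i, k, B)] * table[(k, j, C)]
    return table[(0, n, start)]

def classify(word, class_grammars, start="S"):
    """Return the class whose grammar assigns the string the highest probability."""
    return max(class_grammars, key=lambda c: inside_probability(class_grammars[c], start, word))

if __name__ == "__main__":
    # Two toy classes over the terminal symbols "a" and "b".
    g_mostly_a = [("S", ("S", "A"), 0.4), ("S", ("a",), 0.5), ("S", ("b",), 0.1),
                  ("A", ("a",), 0.9), ("A", ("b",), 0.1)]
    g_mostly_b = [("S", ("S", "A"), 0.4), ("S", ("a",), 0.1), ("S", ("b",), 0.5),
                  ("A", ("a",), 0.1), ("A", ("b",), 0.9)]
    print(classify(list("aaab"), {"class_1": g_mostly_a, "class_2": g_mostly_b}))  # class_1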

20.