Similar Literature
 Found 20 similar documents (search time: 562 ms)
1.
This article is essentially an extended review with historical comments. It looks at an algorithm published in 1943 by M. H. A. Newman, which decides whether a lambda-calculus term is typable without actually computing its principal type. Newman's algorithm seems to have been completely neglected by the type theorists who invented their own rather different typability algorithms over 15 years later.

2.
Algorithms can be used to prove and to discover new theorems. This paper shows how algorithmic skills in general, and the notion of invariance in particular, can be used to derive many results from Euclid’s algorithm. We illustrate how to use the algorithm as a verification interface (i.e., how to verify theorems) and as a construction interface (i.e., how to investigate and derive new theorems). The theorems that we verify are well-known and most of them are included in standard number-theory books. The new results concern distributivity properties of the greatest common divisor and a new algorithm for efficiently enumerating the positive rationals in two different ways. One way is known and is due to Moshe Newman. The second is new and corresponds to a deforestation of the Stern-Brocot tree of rationals. We show that both enumerations stem from the same simple algorithm. In this way, we construct a Stern-Brocot enumeration algorithm with the same time and space complexity as Newman’s algorithm. A short review of the original papers by Stern and Brocot is also included.
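As a concrete aside (not taken from the paper), Newman's enumeration of the positive rationals mentioned above fits in a few lines of Python; the function name and output below are illustrative.

```python
from fractions import Fraction
import math

def newman_rationals(count):
    """Yield the first `count` terms of the Calkin-Wilf enumeration of the
    positive rationals via Moshe Newman's recurrence
    q_{k+1} = 1 / (2*floor(q_k) - q_k + 1), starting from q_1 = 1.
    Every positive rational appears exactly once, using O(1) space."""
    q = Fraction(1)
    for _ in range(count):
        yield q
        q = 1 / (2 * math.floor(q) - q + 1)

print([str(q) for q in newman_rationals(8)])
# ['1', '1/2', '2', '1/3', '3/2', '2/3', '3', '1/4']
```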

3.
Optical interconnections have attracted much attention from engineers and scientists due to their potential for gigahertz transfer rates and concurrent access to the bus in a pipelined fashion. These unique characteristics of optical interconnections give us the opportunity to reconsider traditional algorithms designed for ideal parallel computing models, such as PRAMs. Since the PRAM model is far from practice, not all algorithms designed on this model can be implemented on a realistic parallel computing system. From this point of view, we study Cole’s pipelined merge sort [Cole R. Parallel merge sort. SIAM J Comput 1988;14:770–85] on the CREW PRAM and extend it in an innovative way to an optical interconnection model, the LARPBS (Linear Array with Reconfigurable Pipelined Bus System) model [Pan Y, Li K. Linear array with a reconfigurable pipelined bus system—concepts and applications. J Inform Sci 1998;106:237–58]. Although Cole’s algorithm is optimal, communication details have not been provided because it is designed for a PRAM. We close this gap in our sorting algorithm on the LARPBS model and obtain an O(log N)-time optimal sorting algorithm using O(N) processors. This is a substantial improvement over the previous best sorting algorithm on the LARPBS model, which runs in O(log N log log N) worst-case time using N processors [Datta A, Soundaralakshmi S, Owens R. Fast sorting algorithms on a linear array with a reconfigurable pipelined bus system. IEEE Trans Parallel Distribut Syst 2002;13(3):212–22]. Our solution allows processors to be assigned and reused efficiently. We also establish two new properties of Cole’s sorting algorithm, which are presented as lemmas in this paper.

4.
Any mathematical theory of algorithms striving to offer a foundation for programming needs to provide a rigorous definition for an abstract algorithm. The works reported by Girard (1988) in [10] and by Moschovakis (1989, 1995) in [29], [30] and [31] are among the best examples of such attempts. Both try to offer a mathematically precise and rigorous formulation of an abstract algorithm, intend to keep the algorithmic flavour present, and take the notion of recursion as primary and central. The present work is motivated by Girard’s GoI 2 paper (Girard (1988) [10]), which offers a treatment of recursion in terms of fixed points of linear functions. It is situated in the context of the geometry of interaction (GoI) program and is carried out in the concrete setting of the space of bounded linear maps on a Hilbert space. In this paper, we extend the work in Girard (1988) [10] to the context of traced unique decomposition categories, once again emphasizing the role of abstract trace in the theory of computing. We show that traced unique decomposition categories enriched over partially additive monoids, or variations thereof, suffice to axiomatize and hence extend the work in Girard’s GoI 2 paper. The theory developed here allows us to formulate an abstract notion of an algorithm as a pair of morphisms in a traced unique decomposition category, an abstract notion of computation as the execution formula (defined using the trace operator) applied to an algorithm, and finally a notion of deadlock-freeness for algorithms. In addition, we can treat recursive definitions, fixed points and fixed point operators in a uniform way in terms of traced unique decomposition categories.

5.
An expression such as ∀x(P(x) ↔ φ(P)), where P occurs in φ(P), does not always define P. When such an expression implicitly defines P, in the sense of Beth (1953) [1] and Padoa (1900) [13], we call it a recursive definition. In Least Fixed-Point Logic (LFP), we have theories where interesting relations can be recursively defined (Ebbinghaus, 1995 [4], Libkin, 2004 [12]). We will show that for some sorts of recursive definitions there are explicit definitions in sufficiently strong theories of LFP. It is known that LFP, restricted to finite models, does not have Beth’s Definability Theorem (Gurevich, 1996 [7], Hodkinson, 1993 [8], Dawar, 1995 [3]). Beth’s Definability Theorem states that, if a relation is implicitly defined, then there is an explicit definition for it. We will also give a proof that Beth’s Definability Theorem fails for LFP without this finite-model restriction. We will investigate fragments of LFP for which Beth’s Definability Theorem holds, specifically theories whose models are well-founded structures.

6.
We consider linguistic data(base) summaries in the sense of Yager [Information Sciences 28 (1982) 69-86], exemplified by “most employees are young and well paid” (with some degree of truth attached), for a personnel database, as an intuitive, human-consistent, natural-language-based knowledge discovery tool. We first present an extension of Yager’s classic approach that involves more sophisticated criteria of goodness, search methods, etc. We advocate the use of the concept of a protoform (prototypical form), recently vigorously advocated by Zadeh [A prototype-centered approach to adding deduction capabilities to search engines—the concept of a protoform. BISC Seminar, University of California, Berkeley, 2002], as a general form of a linguistic data summary. We present an extension of our interactive approach, based on fuzzy logic and fuzzy database queries, which makes it possible to implement such linguistic data summaries. We show how fuzzy queries are related to linguistic summaries, and show that one can introduce a hierarchy of protoforms, or abstract summaries in the sense of Zadeh’s latest ideas [A prototype-centered approach to adding deduction capabilities to search engines—the concept of a protoform. BISC Seminar, University of California, Berkeley, 2002], meant mainly for increasing the deduction capabilities of search engines. For illustration we show an implementation for the sales database of a computer retailer, employing a particular type of protoform of a linguistic summary.
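To make the basic idea concrete, here is a minimal Python sketch of Yager's classic truth degree T("Q objects are S") = μ_Q((1/n) Σ μ_S(y_i)) for the summary quoted above. The membership functions, the fuzzy quantifier "most", and the toy employee data are invented for illustration and are not taken from the paper.

```python
# Degree of truth of the linguistic summary "most employees are young and
# well paid" in Yager's basic scheme. All memberships below are assumptions.

def mu_young(age):            # assumed membership function for "young"
    return max(0.0, min(1.0, (35 - age) / 10))

def mu_well_paid(salary):     # assumed membership function for "well paid"
    return max(0.0, min(1.0, (salary - 3000) / 2000))

def mu_most(r):               # assumed fuzzy quantifier "most"
    return max(0.0, min(1.0, 2 * r - 0.6))

employees = [(28, 4200), (45, 5100), (31, 3600), (52, 2900), (26, 4800)]

# S = "young and well paid"; the conjunction is taken as min (a t-norm).
r = sum(min(mu_young(a), mu_well_paid(s)) for a, s in employees) / len(employees)
truth = mu_most(r)            # degree of truth of the whole summary
print(f"truth of 'most employees are young and well paid' = {truth:.2f}")
```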

7.
We consider the feedback vertex set and feedback arc set problems on bipartite tournaments. We improve on recent results by giving a 2-approximation algorithm for the feedback vertex set problem. We show that this result is the best that we can attain when using optimal solutions to a certain linear program as a lower bound on the optimal value. For the feedback arc set problem on bipartite tournaments, we show that a recent 4-approximation algorithm proposed by Gupta (2008) [8] is incorrect. We give an alternative 4-approximation algorithm based on an algorithm for the feedback arc set problem on (non-bipartite) tournaments given by van Zuylen and Williamson (2009) [14].

8.
In a study (Szekely, 1965) [1] of the locomotion of salamanders, it is observed that a ‘doubly periodic traveling wave solution’ of a logical neural network can be used to explain a dynamic pattern of movements. We show here that nonlinear and nonlogical artificial neural networks can also be built by means of reaction-diffusion principles, and that the existence or nonexistence of doubly periodic traveling waves can be guaranteed by adjusting parameters built into these networks. Our derivations for existence are based on implicit function theorems and the invariance properties of our model, while nonexistence is based on boundedness properties of the polynomial reaction term. We also give illustrative examples as well as comments on the differences between the present results and those obtained for the linear models studied earlier in Cheng and Lin (2009) [2] and [3].

9.
We show that different theories recently proposed for independent component analysis (ICA) lead to the same iterative learning algorithm for blind separation of mixed independent sources. We review those theories and suggest that information theory can be used to unify several lines of research. Pearlmutter and Parra [1] and Cardoso [2] showed that the infomax approach of Bell and Sejnowski [3] and the maximum likelihood estimation approach are equivalent. We show that negentropy maximization also has equivalent properties, and therefore, all three approaches yield the same learning rule for a fixed nonlinearity. Girolami and Fyfe [4] have shown that the nonlinear principal component analysis (PCA) algorithm of Karhunen and Joutsensalo [5] and Oja [6] can also be viewed from information-theoretic principles since it minimizes the sum of squares of the fourth-order marginal cumulants, and therefore, approximately minimizes the mutual information [7]. Lambert [8] has proposed different Bussgang cost functions for multichannel blind deconvolution. We show how the Bussgang property relates to the infomax principle. Finally, we discuss convergence and stability as well as future research issues in blind source separation.
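For readers who want to see the shared learning rule in code, the following numpy sketch implements the infomax/maximum-likelihood update in its natural-gradient form with φ(y) = tanh(y) as the fixed nonlinearity (appropriate for super-Gaussian sources); the data, step size, and iteration count are assumptions, and this is not code from the paper.

```python
# Minimal sketch of the natural-gradient infomax ICA rule: dW ∝ (I - φ(y) yᵀ) W.
import numpy as np

rng = np.random.default_rng(0)
n, T = 2, 5000
S = rng.laplace(size=(n, T))            # independent super-Gaussian sources
A = rng.normal(size=(n, n))             # unknown mixing matrix
X = A @ S                               # observed mixtures

W = np.eye(n)                           # unmixing matrix estimate
lr = 0.05
for _ in range(1000):
    Y = W @ X
    W += lr * (np.eye(n) - np.tanh(Y) @ Y.T / T) @ W   # natural-gradient step

# After training, W @ A should be close to a scaled permutation matrix,
# i.e., the recovered signals Y match the sources up to order and scale.
print(np.round(W @ A, 2))
```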

10.
Typing of lambda-terms in elementary and light affine logic (EAL and LAL, respectively) has been studied for two different reasons: on the one hand the evaluation of typed terms using LAL (EAL, respectively) proof-nets admits a guaranteed polynomial (elementary, respectively) bound; on the other hand these terms can also be evaluated by optimal reduction using the abstract version of Lamping’s algorithm. The first reduction is global while the second one is local and asynchronous. We prove that for LAL (EAL, respectively) typed terms, Lamping’s abstract algorithm also admits a polynomial (elementary, respectively) bound. We also give a proof of its soundness and completeness (for EAL and LAL with type fixpoints), by using a simple geometry of interaction model (context semantics).

11.
Recently, Liu et al. [26] discovered that Certificateless Public Key Encryption (CL-PKE) suffers from the Denial-of-Decryption (DoD) attack. Based on CL-PKC, the authors introduced a new paradigm called Self-Generated-Certificate Public Key Cryptography (SGC-PKC) that captures the DoD attack, and proposed the first scheme, derived from a novel application of Waters’ Identity-Based Encryption scheme [43]. In this paper, we propose a new SGC-PKE scheme that does not depend on bilinear pairings and hence is more efficient and requires shorter public keys than Liu et al.’s scheme. More importantly, our scheme reaches Girault’s trust level 3 [16] (cf. Girault’s trust level 2 of Liu and Au’s scheme), the same trust level achieved by a traditional PKI. In addition, we also discuss how our scheme can lead to a secure and self-organized key management and authentication system for ad hoc wireless networks, with support for user-controlled key renewal.

12.
We present an approximation algorithm for the hitting set problem when the VC-dimension of the set system is small. Our algorithm uses a linear programming relaxation to compute a probability measure for which ε-nets are always hitting sets (see Corollary 15.6 in Pach and Agarwal [Combinatorial Geometry, J. Wiley, New York, 1995]). The comparable algorithm of Brönnimann and Goodrich [Almost optimal set covers in finite VC-dimension, Discrete Comput. Geom. 14 (1995) 463] computes such a probability measure by an iterative reweighting technique. The running time of our algorithm is comparable with theirs, and the approximation ratio is smaller by a constant factor. We also show how our algorithm can be parallelized and extended to the minimum cost hitting set problem.

13.
A crucial role in the Microsoft .NET Framework Common Language Runtime (CLR) security model is played by type safety of the Common Intermediate Language (CIL). In this paper, we formally prove type safety of a large subset of CIL. To do so, we begin by specifying the static and dynamic semantics of CIL by providing an abstract interpreter for CIL programs. We then formalize the bytecode verification algorithm, whose job is to compute a well-typing for a given method, and prove type safety of well-typed methods, i.e., that executing legal, well-typed methods according to the semantic model does not lead to any run-time type violations. Finally, to prove CIL’s type safety, we show that the verification algorithm is sound, i.e., the typings it produces are well-typings, and complete, i.e., if a well-typing exists, then the algorithm computes one.

14.
We have developed a new algorithm for the calculation of surface tension and contact angle. The method is applied to the motion of water striders. Two types of movement are found in the experiment, and these are analyzed by the simulation. The comparison of these results shows that there are two distinct propulsion mechanisms. In the conventional breaststroke type, the leg speed must be larger than the phase speed of the surface wave, a requirement known as Denny’s paradox. In the new butterfly type, even infant water striders can swim, and Denny’s paradox is resolved.

15.
E.J. Davison, P. Wong. Automatica, 1975, 11(3): 297-308
A new conjugate-gradient algorithm which minimizes a function of n variables is given. The algorithm performs n orthogonal searches in each stage and hence has the property that it is robust, i.e. it will not easily fail on functions which have a large number of variables (n ≥ 10) nor on functions which have ‘ridge-like’ properties. A general class of functions called L-functions, which includes the class of quadratic functions as a special case, is defined, and it is shown that the algorithm will converge to the minimum of an L-function in n (or fewer) one-dimensional minimizations and (n − 1) (or fewer) one-dimensional pseudo-minimizations. Numerical experiments are included for systems of the second to the fortieth order, and, based on these experiments (assuming that the gradients are calculated numerically), the new algorithm appears to be more robust than Powell’s [10], Fletcher-Powell’s [11], and Jacobson-Oksman’s [14] methods, faster than Rosenbrock’s [9] method, and especially effective on high-dimensional problems.
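For orientation only, here is a minimal sketch of a generic nonlinear conjugate-gradient iteration (Polak-Ribière direction with a bounded backtracking line search). It is not the Davison-Wong algorithm described above and omits its orthogonal searches, L-function machinery, and pseudo-minimizations; the test function, tolerances, and iteration counts are assumptions.

```python
import numpy as np

def conjugate_gradient_min(f, grad, x0, iters=200, tol=1e-8):
    """Generic nonlinear CG sketch: Polak-Ribiere+ directions, Armijo backtracking."""
    x = np.asarray(x0, dtype=float)
    g = grad(x)
    d = -g                                          # initial (steepest-descent) direction
    for _ in range(iters):
        t, fx = 1.0, f(x)
        for _ in range(60):                         # bounded backtracking line search
            if f(x + t * d) <= fx + 1e-4 * t * (g @ d):
                break
            t *= 0.5
        x_new = x + t * d
        g_new = grad(x_new)
        if np.linalg.norm(g_new) < tol:
            return x_new
        beta = max(0.0, g_new @ (g_new - g) / (g @ g))   # Polak-Ribiere+ coefficient
        d = -g_new + beta * d
        x, g = x_new, g_new
    return x

# Example: a simple quadratic with minimum at (3, -2).
f = lambda x: (x[0] - 3) ** 2 + 10 * (x[1] + 2) ** 2
grad = lambda x: np.array([2 * (x[0] - 3), 20 * (x[1] + 2)])
print(conjugate_gradient_min(f, grad, [0.0, 0.0]))
```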

16.
To address the k-means algorithm’s sensitivity to the choice of initial centers, a k-means clustering algorithm based on an improved particle swarm optimization (k-means cluster algorithm based on Improved PSO, IPK-means) is proposed. A chaotic search procedure is added to the particle swarm algorithm to increase the diversity of the swarm in the later PSO iterations, and a dynamic adjustment-factor formula is given for the particle-update step, so that the adjustment factor depends on the particle’s fitness value, i.e., different particles in the same iteration have different adjustment factors. Finally, the improved PSO algorithm is applied to k-means clustering to find better initial centers. Experimental results show that the algorithm achieves good clustering results.
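As an illustration of the overall idea (PSO chooses the initial centers, then k-means refines them), here is a minimal Python sketch. It is not the paper's IPK-means: the chaotic search step and the dynamic adjustment-factor formula are omitted, and all parameter values and the synthetic data are assumptions.

```python
import numpy as np

def wcss(centers, X):
    """Within-cluster sum of squares: the PSO fitness (lower is better)."""
    d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
    return (d.min(axis=1) ** 2).sum()

def pso_init_centers(X, k, particles=20, iters=50, w=0.7, c1=1.5, c2=1.5):
    """Standard global-best PSO over candidate sets of k centers."""
    rng = np.random.default_rng(0)
    n, _ = X.shape
    pos = X[rng.integers(0, n, size=(particles, k))]          # (particles, k, dim)
    vel = np.zeros_like(pos)
    pbest, pbest_fit = pos.copy(), np.array([wcss(p, X) for p in pos])
    gbest = pbest[pbest_fit.argmin()].copy()
    for _ in range(iters):
        r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
        vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
        pos = pos + vel
        fit = np.array([wcss(p, X) for p in pos])
        improved = fit < pbest_fit
        pbest[improved], pbest_fit[improved] = pos[improved], fit[improved]
        gbest = pbest[pbest_fit.argmin()].copy()
    return gbest                                              # initial centers for k-means

def kmeans(X, centers, iters=100):
    """Plain Lloyd iterations starting from the PSO-chosen centers."""
    for _ in range(iters):
        labels = np.linalg.norm(X[:, None] - centers[None], axis=2).argmin(axis=1)
        centers = np.array([X[labels == j].mean(axis=0) if (labels == j).any()
                            else centers[j] for j in range(len(centers))])
    return centers, labels

rng = np.random.default_rng(1)
X = np.vstack([rng.normal(m, 0.3, size=(50, 2)) for m in ((0, 0), (3, 3), (0, 3))])
centers, labels = kmeans(X, pso_init_centers(X, k=3))
print(centers)
```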

17.
Interactive genetic algorithms are effective methods for solving optimization problems with implicit or fuzzy indices, and have been successfully applied to many real-world optimization problems in recent years. In traditional interactive genetic algorithms, many researchers adopt an accurate number to express an individual’s fitness assigned by a user. But it is difficult for this expression to reasonably reflect a user’s fuzzy and gradual cognition of an individual. We present an interactive genetic algorithm with an individual’s fuzzy fitness in this paper. First, we adopt a fuzzy number described by a Gaussian membership function to express an individual’s fitness. Then, in order to compare different individuals, we generate a fitness interval based on the α-cut set, and obtain the probability of individual dominance by using the probability of interval dominance. Finally, we determine the superior individual in tournament selection with size two based on the probability of individual dominance, and perform the subsequent evolutionary operations. We apply the proposed algorithm to a fashion evolutionary design system, a typical optimization problem with an implicit index, and compare it with two interactive genetic algorithms, i.e., an interactive genetic algorithm with an individual’s accurate fitness and an interactive genetic algorithm with an individual’s interval fitness. The experimental results show that the proposed algorithm is advantageous in alleviating user fatigue and in finding individuals that satisfy the user.
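The following sketch illustrates one step of the comparison described above: the α-cut of a Gaussian membership function is computed exactly, but the interval-dominance measure is a common possibility-degree formula chosen here for illustration only and may differ from the paper's probability of interval dominance; the fitness values and α are assumptions.

```python
import math

def alpha_cut(m, sigma, alpha):
    """Alpha-cut of mu(x) = exp(-(x - m)^2 / (2 sigma^2)), for 0 < alpha <= 1."""
    r = sigma * math.sqrt(2 * math.log(1 / alpha))
    return (m - r, m + r)

def dominance(a, b):
    """Degree to which interval a = (a_lo, a_hi) dominates b (assumed formula)."""
    length = (a[1] - a[0]) + (b[1] - b[0])
    if length == 0:
        return 1.0 if a[0] >= b[0] else 0.0
    return min(max((a[1] - b[0]) / length, 0.0), 1.0)

# Tournament selection of size two: the individual with the larger
# dominance degree is taken as the winner.
fit_x = alpha_cut(m=0.70, sigma=0.10, alpha=0.5)   # fuzzy fitness of individual x
fit_y = alpha_cut(m=0.60, sigma=0.05, alpha=0.5)   # fuzzy fitness of individual y
winner = "x" if dominance(fit_x, fit_y) >= 0.5 else "y"
print(round(dominance(fit_x, fit_y), 3), "->", winner)
```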

18.
This work studies the quantum query complexity of Boolean functions in an unbounded-error scenario where it is only required that the query algorithm succeeds with probability strictly greater than 1/2. We show that, just as in the communication complexity model, the unbounded-error quantum query complexity is exactly half of its classical counterpart for any (partial or total) Boolean function. Moreover, connecting the query and communication complexity results, we show that the “black-box” approach of Buhrman–Cleve–Wigderson [STOC’98] for converting quantum query algorithms into communication protocols is optimal even in the unbounded-error setting. We also study a related setting, called the weakly unbounded-error setting, where the cost of a query algorithm is given by q + log(1/(2(p − 1/2))), where q is the number of queries made and p > 1/2 is the success probability of the algorithm. In contrast to the case of communication complexity, we show a tight multiplicative Θ(log n) separation between quantum and classical query complexity in this setting for a partial Boolean function. The asymptotic equivalence between them is also shown for some well-studied total Boolean functions.

19.
A Rosenbrock artificial bee colony algorithm (RABC) that combines Rosenbrock’s rotational direction method with an artificial bee colony algorithm (ABC) is proposed for accurate numerical optimization. RABC has two alternating phases: an exploration phase realized by ABC and an exploitation phase carried out by the rotational direction method. The proposed algorithm was tested on a comprehensive set of complex benchmark problems, encompassing a wide range of dimensionality, and it was also compared with several other algorithms. Numerical results show that the new algorithm is promising in terms of convergence speed, success rate, and accuracy. The proposed RABC is also capable of keeping up with direction changes in the problems.

20.
We first give an elementary proof of the periodicity lemma for strings containing one hole (variously called a “wild card”, a “don’t-care” or an “indeterminate letter” in the literature). The proof is modelled on Euclid’s algorithm for the greatest common divisor and is simpler than the original proof given in [J. Berstel, L. Boasson, Partial words and a theorem of Fine and Wilf, Theoret. Comput. Sci. 218 (1999) 135–141]. We then study the two-hole case, where our result agrees with the one given in [F. Blanchet-Sadri, Robert A. Hegstrom, Partial words and a theorem of Fine and Wilf revisited, Theoret. Comput. Sci. 270 (1-2) (2002) 401–419] but is more easily proved and enables us to identify a maximum-length prefix or suffix of the string to which the periodicity lemma does apply. Finally, we extend our result to three or more holes using elementary methods, and state a version of the periodicity lemma that applies to all strings with or without holes. We describe an algorithm that, given the locations of the holes in a string, computes maximum-length substrings to which the periodicity lemma applies, in time proportional to the number of holes. Our approach is quite different from that used by Blanchet-Sadri and Hegstrom, and also simpler.
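As a small illustration (not the algorithm from the paper), the sketch below checks periods of a string that may contain holes, where "period p" is taken in the weak sense that positions i and i+p are compatible, and then shows why the classical Fine-and-Wilf bound needs strengthening once a hole is present.

```python
from math import gcd

def has_period(s, p, hole='*'):
    """Positions i and i+p must agree, treating the hole as matching anything."""
    return all(s[i] == s[i + p] or hole in (s[i], s[i + p])
               for i in range(len(s) - p))

# Hole-free string: periods 4 and 6 with length >= 4 + 6 - gcd(4, 6) = 8
# force period gcd(4, 6) = 2, as the periodicity lemma predicts.
s = "ab" * 12
print(has_period(s, 4), has_period(s, 6), has_period(s, gcd(4, 6)))  # True True True

# With one hole, a short string can have periods 2 and 3 without having
# period gcd(2, 3) = 1, which is why the lemma needs strengthening.
t = "*aba"
print(has_period(t, 2), has_period(t, 3), has_period(t, 1))  # True True False
```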
