Similar literature
20 similar records found
1.
This paper presents several applications of fractional cascading, a new searching technique described in a companion paper. The applications center around a variety of geometric query problems. Examples include intersecting a polygonal path with a line, slanted range search, orthogonal range search, computing locus functions, and others. Some results on the optimality of fractional cascading, and certain extensions of the technique for retrieving additional information, are also included. The first author was supported in part by NSF Grant MCS 83-03925 and by the Office of Naval Research and the Defense Advanced Research Projects Agency under contract N00014-83-K-0146 and ARPA Order No. 4786. Part of this work was done while the second author was employed by the Xerox Palo Alto Research Center.

2.
Hideya Iwasaki 《Software》2002,32(14):1345-1363
TeX allows users to define a macro that abstracts a sequence of typesetting commands. However, defining macros is not easy for most users, because the mechanism of macro expansion in TeX is complicated. As a remedy for this situation, a new system that enables users to define macros for TeX documents as Lisp programs has been developed. The system acts as a preprocessor for TeX; given a document that contains Lisp programs as S-expressions, the system expands each S-expression on the basis of Lisp's evaluation rules, thus generating an ordinary TeX document. The system is very flexible and easy to use, thanks to the underlying language's general-purpose data structure, i.e. the S-expression, applicative-order evaluation, and rich set of predefined functions. This paper also demonstrates that the proposed system is effective for practical use by giving some concrete examples of Lisp macros, some of which are difficult to define in terms of TeX commands. The system is currently implemented in Emacs Lisp, and a user-friendly environment is thus available in the Emacs text editor. Copyright © 2002 John Wiley & Sons, Ltd.

3.
Over the past two decades, Scheme macros have evolved into a powerful API for the compiler front end. Like Lisp macros, their predecessors, Scheme macros expand source programs into a small core language; unlike Lisp systems, Scheme macro expanders preserve lexical scoping, and advanced Scheme macro systems handle other important properties such as source location. Using such macros, Scheme programmers now routinely develop the ultimate abstraction: embedded domain-specific programming languages. Unfortunately, a typical Scheme programming environment provides little support for macro development. This lack makes it difficult for programmers to debug their macros and for novices to study the behavior of macros. In response, we have developed a stepping debugger specialized to the concerns of macro expansion. This debugger presents the macro expansion process as a linear rewriting sequence of annotated terms; it graphically illustrates the binding structure of the program as expansion reveals it; and it adapts to the programmer's level of abstraction, hiding details of syntactic forms that the programmer considers built-in.

4.
The relationship between linear lists and free trees is studied. We examine a number of well-known data structures for computing functions on linear lists and show that they can be canonically transformed into data structures for computing the same functions defined over free trees. This is used to establish new upper bounds on the complexity of several query-answering problems. This work was started when the author was at Brown University, Providence, RI. It was partly supported by NSF Grant MCS 83-03925. A preliminary version of this work has appeared in the Proceedings of the 25th Annual IEEE Symposium on Foundations of Computer Science, West Palm Beach, FL, October 1984, pp. 358–368.

5.
6.
Multilisp is a parallel programming language derived from the Scheme dialect of Lisp by the addition of the future construct. It has been implemented on Concert, a 32-processor shared-memory multiprocessor. A statistics-gathering feature of Concert Multilisp produces parallelism profiles showing the number of processors busy with computing or overhead, as a function of time. Experience gained using parallelism profiles and other measurement tools on several application programs has revealed three basic ways in which future generates concurrency. These ways are illustrated on two example programs: the Lisp mapping function mapcar and the partitioning routine from Quicksort. Experience with Multilisp programming exposes issues relating to side effects, error and exception handling, low-level operations for explicit manipulation of futures and tasks, and speculative computing, which are also discussed. The basic outlines of Multilisp are now fairly clear and have stood the test of being used for several applications, but further language design work is especially needed in the areas of speculative computing and exception handling. This research was supported in part by the Defense Advanced Research Projects Agency and was monitored by the Office of Naval Research under contract numbers N00014-83-K-0125 and N00014-84-K-0099.
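The future construct itself exists only in Multilisp, but the two patterns of concurrency named above — a parallel mapcar and a Quicksort whose sub-sorts run concurrently — can be sketched with ordinary futures in Python. The executor, worker count, and depth cutoff below are illustrative assumptions (Multilisp's runtime schedules nested futures itself and needs no such cutoff):

```python
# Rough Python analogue of Multilisp's future, using concurrent.futures.
from concurrent.futures import ThreadPoolExecutor

executor = ThreadPoolExecutor(max_workers=4)

def parallel_map(f, xs):
    """Analogue of a parallel mapcar: spawn one future per element, then
    'touch' each future by asking for its result."""
    futures = [executor.submit(f, x) for x in xs]
    return [fut.result() for fut in futures]

def parallel_quicksort(xs, depth=0):
    """Partition, then sort the two halves concurrently near the top of the
    recursion. The depth cutoff avoids deadlocking the fixed thread pool;
    Multilisp's scheduler needs no such restriction."""
    if len(xs) <= 1:
        return list(xs)
    pivot, rest = xs[0], xs[1:]
    lo = [x for x in rest if x < pivot]      # the partitioning routine
    hi = [x for x in rest if x >= pivot]
    if depth < 2:
        left = executor.submit(parallel_quicksort, lo, depth + 1)
        right = executor.submit(parallel_quicksort, hi, depth + 1)
        return left.result() + [pivot] + right.result()
    return (parallel_quicksort(lo, depth + 1) + [pivot]
            + parallel_quicksort(hi, depth + 1))

if __name__ == "__main__":
    print(parallel_map(lambda x: x * x, range(8)))
    print(parallel_quicksort([5, 3, 8, 1, 9, 2, 7]))
```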

7.
Summary. The bulk of arguments that focus on clean semantics and notational simplicity tend to favor uniting the function and value namespaces. In spite of this, there are those who hold strongly to a belief that a two-namespace system affords useful expressive power that they are unwilling to do without. In the end, practical considerations favor the status quo for Common Lisp. There are a large number of improvements beyond a single namespace that could be made to Common Lisp that would clean it up and simplify it. We feel that the time for such radical changes to Common Lisp has passed, and it is the job of future Lisp designers to take the lessons from Common Lisp and Scheme to produce an improved Lisp. This paper is an adaptation of a report produced by the authors for X3J13, a technical working group engaged in standardizing Common Lisp for ANSI.
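For readers outside the Lisp world, the single-versus-two-namespace distinction is easy to see in Python, which (like Scheme) keeps functions and values in one namespace. The tiny example below is illustrative only and is not drawn from the paper:

```python
def single_namespace_demo():
    list = [1, 2, 3]            # a value binding shadows the built-in function
    try:
        return list((4, 5))     # in a two-namespace Lisp, (list 4 5) would
    except TypeError:           # still find the function
        return "shadowed: 'list' now names a value, not the function"

print(single_namespace_demo())
```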

8.
A major attraction of functional languages is their support for higher-order functions. This facility increases the expressive power of functional languages by allowing concise and reusable programs to be written. However, higher-order functions are more expensive to execute and to analyse for optimisations. To reduce the performance penalties of using higher-order functions, this paper proposes two transformation algorithms which can automatically remove most higher-order functions from functional programs. The first algorithm uses an eta-expansion technique to eliminate expressions which return function-type results. The second algorithm uses a function specialisation technique to eliminate expressions with function-type arguments. Together, they remove higher-order functions by transforming each higher-order program into an equivalent first-order (or lower-order) program. We prove that the two algorithms are terminating (when applied to well-typed programs) and also totally correct (preserving non-strict semantics). A preliminary version of this paper appeared in the Proceedings of the 15th Australian Computer Science Conference, Hobart, Tasmania, January 1992.
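The flavour of the two transformations can be shown on a hand-worked toy example. The sketch below is in Python rather than the functional language treated in the paper, the names are invented, and the first-order versions are written by hand in the way the algorithms would derive them:

```python
# --- Higher-order original -------------------------------------------------
def compose(f, g):                 # returns a function-type result
    return lambda x: f(g(x))

def apply_to_all(f, xs):           # takes a function-type argument
    return [f(x) for x in xs]

inc_then_double = compose(lambda x: 2 * x, lambda x: x + 1)
ho_result = apply_to_all(inc_then_double, [1, 2, 3])

# --- First-order residual program ------------------------------------------
# Eta-expansion: compose no longer returns a closure; the extra argument x
# is passed explicitly at every call site.
def compose_inc_double(x):
    return 2 * (x + 1)

# Function specialisation: apply_to_all is specialised to the particular
# functional argument it is called with, removing the function-type parameter.
def apply_to_all_inc_double(xs):
    return [compose_inc_double(x) for x in xs]

fo_result = apply_to_all_inc_double([1, 2, 3])
assert ho_result == fo_result == [4, 6, 8]
```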

9.
Pipelining in multi-query optimization
Database systems frequently have to execute a set of related queries, which share several common subexpressions. Multi-query optimization exploits this, by finding evaluation plans that share common results. Current approaches to multi-query optimization assume that common subexpressions are materialized. Significant performance benefits can be had if common subexpressions are pipelined to their uses, without being materialized. However, plans with pipelining may not always be realizable with limited buffer space, as we show. We present a general model for schedules with pipelining, and present a necessary and sufficient condition for determining validity of a schedule under our model. We show that finding a valid schedule with minimum cost is NP-hard. We present a greedy heuristic for finding good schedules. Finally, we present a performance study that shows the benefit of our algorithms on batches of queries from the TPCD benchmark.
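A toy sketch of the difference between materializing and pipelining a shared subexpression can be given with Python generators; the "queries", the shared filter, and the use of itertools.tee are illustrative assumptions and are not the paper's model or algorithm:

```python
from itertools import tee

def expensive_common_subexpression(rows):
    """Shared selection + projection that both queries need."""
    for r in rows:
        if r % 3 == 0:           # pretend this is a costly selection
            yield r * r           # and a projection

rows = range(1, 20)

# Materialized plan: compute the shared result once, store it, reuse it.
materialized = list(expensive_common_subexpression(rows))
q1_mat = sum(materialized)                 # query 1: an aggregate
q2_mat = max(materialized)                 # query 2: another aggregate

# Pipelined plan: tee the generator so both queries read the shared stream.
# tee keeps an internal buffer of items one reader has seen and the other has
# not; consuming the two branches in step keeps that buffer small, which is
# the "valid schedule under limited buffer space" issue discussed above.
s1, s2 = tee(expensive_common_subexpression(rows))
q1_pipe, q2_pipe = 0, float("-inf")
for a, b in zip(s1, s2):                   # advance both consumers together
    q1_pipe += a
    q2_pipe = max(q2_pipe, b)

assert (q1_mat, q2_mat) == (q1_pipe, q2_pipe)
```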

10.
11.
This paper describes MicroScope, a framework for developing analysis tools for Lisp programs. MicroScope uses a knowledge-intensive approach for program representation and analysis. The analysis tools share a common object-oriented program database and a common Prolog inference engine. The use of Prolog and a declarative representation for programs permits sharing of information, and provides high-bandwidth communication between diverse analysis tools. It also supports program specification and debugging activities in the same framework. Extensions to Prolog to support analysis are described, and two tools, the Critic and the Expector, are presented. This work was supported in part by Hewlett-Packard Company, the National Science Foundation under Grant Number MCS81-21750, and the Defense Advanced Research Projects Agency under contract number DAAK11-84-K-0017.

12.
In computational geometry many search problems and range queries can be solved by performing an iterative search for the same key in separate ordered lists. In this paper we show that, if these ordered lists can be put in a one-to-one correspondence with the nodes of a graph of degree d so that the iterative search always proceeds along edges of that graph, then we can do much better than the obvious sequence of binary searches. Without expanding the storage by more than a constant factor, we can build a data structure, called a fractional cascading structure, in which all original searches after the first can be carried out at only log d extra cost per search. Several results related to the dynamization of this structure are also presented. A companion paper gives numerous applications of this technique to geometric problems. The first author was supported in part by NSF Grant MCS 83-03925 and by the Office of Naval Research and the Defense Advanced Research Projects Agency under contract N00014-83-K-0146 and ARPA Order No. 4786. Part of this work was done while the second author was employed by the Xerox Palo Alto Research Center.
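For the simplest catalog graph, a path of sorted lists, the construction can be sketched as follows. The sketch is illustrative: it fixes a sampling rate of one half, omits dynamization and general graphs, and uses a linear scan where the real structure keeps an extra pointer per element:

```python
from bisect import bisect_left

def build(catalogs):
    """Each level stores three parallel lists: values, 'is own element' flags,
    and bridge indices into the next level's value list. Every level is the
    merge of its own catalog with every second element of the next level."""
    k = len(catalogs)
    levels = [None] * k
    last = sorted(catalogs[k - 1])
    levels[k - 1] = (last, [True] * len(last), [None] * len(last))
    for i in range(k - 2, -1, -1):
        nxt_vals = levels[i + 1][0]
        merged = sorted([(v, True) for v in catalogs[i]] +
                        [(v, False) for v in nxt_vals[::2]])   # sample 1/2
        vals = [v for v, _ in merged]
        own = [o for _, o in merged]
        bridge = [bisect_left(nxt_vals, v) for v in vals]
        levels[i] = (vals, own, bridge)
    return levels

def query(levels, q):
    """First element >= q in every catalog: one binary search, then bridges."""
    out = []
    pos = bisect_left(levels[0][0], q)               # the only full search
    for i, (vals, own, bridge) in enumerate(levels):
        j = pos
        while j < len(vals) and not own[j]:          # skip copied elements; a
            j += 1                                   # real structure keeps a
        out.append(vals[j] if j < len(vals) else None)   # pointer to do this
        if i + 1 < len(levels):
            nxt_vals = levels[i + 1][0]
            nxt = bridge[pos] if pos < len(vals) else len(nxt_vals)
            while nxt > 0 and nxt_vals[nxt - 1] >= q:
                nxt -= 1                             # O(1) correction steps
            pos = nxt
    return out

catalogs = [[2, 7, 11, 19], [3, 5, 13, 17, 23], [1, 8, 9, 21]]
levels = build(catalogs)
print(query(levels, 10))    # -> [11, 13, 21]
```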

13.
The work deals with automatic deductive synthesis of functional programs. The formal specification of a program is taken as a mathematical existence theorem; while proving it, we derive a program and simultaneously prove that this program corresponds to the given specification. Several problems have to be resolved for automatic synthesis: the choice of synthesis rules that allow us to derive the basic constructions of a functional program, the order of rule application, and the choice of a particular induction rule. The method proposed here is based on the deductive tableau method. The basic method gives rules for functional program construction. To determine the proof strategy we use some external heuristics, including rippling, and for forming the induction hypothesis the combination of rippling and the deductive tableau method proved very useful. The proposed techniques are implemented in the system ALISA (Automatic Lisp Synthesizer) and used for automatic synthesis of several functions in the Lisp language. The work has been partially supported by RFBR grant 05-01-00948a.

14.
The efficiency of common subexpression identification is critical to the performance of multiple-query processing. In this paper, we develop a multigraph for representing and facilitating the processing of multiple queries. In addition to the identical and subsumption cases of common subexpressions exploited by traditional multiple-query processing approaches, the proposed multigraph processing also covers the overlap case. A performance study shows the viability of this technique when compared to an earlier multigraph approach.

15.
Unfolding is a semantics-preserving program transformation technique that consists in the expansion of subexpressions of a program using their own definitions. In this paper we define two unfolding-based transformation rules that extend the classical definition of the unfolding rule (for pure logic programs) to a fuzzy logic setting. We use a fuzzy variant of Prolog where each program clause can be interpreted under a different (fuzzy) logic. We adapt the concept of a computation rule, a mapping that selects the subexpression of a goal involved in a computation step, and we prove the independence of the computation rule. We also define a basic transformation system and demonstrate its strong correctness; that is, the original and transformed programs compute the same fuzzy computed answers. Finally, we prove that our transformation rules always produce an improvement in the efficiency of the residual program, by reducing the length of successful fuzzy SLD-derivations.
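A propositional toy example (no variables, no unification, and a fixed Gödel t-norm min — all simplifying assumptions not made by the paper) conveys the shape of the rule: unfolding splices a clause's definition into the body that calls it, and the transformed program computes the same fuzzy answer with a shorter derivation:

```python
# program: head -> (list of body atoms, truth degree of the clause)
program = {
    "good_restaurant": (["close", "cheap"], 0.9),
    "close":           ([], 0.8),                 # a fact with degree 0.8
    "cheap":           ([], 0.6),
}

def unfold(program, head, atom):
    """Unfold one body atom of the clause for `head` using that atom's own
    definition: splice in its body and combine degrees with min (the t-norm)."""
    body, degree = program[head]
    sub_body, sub_degree = program[atom]
    new_program = dict(program)
    new_program[head] = ([a for a in body if a != atom] + sub_body,
                         min(degree, sub_degree))
    return new_program

def solve(program, atom):
    """Fuzzy 'computed answer' for an atom: the clause degree combined (by
    min) with the answers for everything in its body."""
    body, degree = program[atom]
    return min([degree] + [solve(program, a) for a in body])

original = solve(program, "good_restaurant")             # min(0.9, 0.8, 0.6)
transformed = solve(unfold(program, "good_restaurant", "close"),
                    "good_restaurant")
assert original == transformed == 0.6                    # same fuzzy answer
```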

16.
In the Office suite, macros give users a platform for extending functionality; a good command of macros can simplify operations and improve working efficiency. Using concrete examples, this paper shows the significant role macros can play in extending Word, and describes in detail how to create, edit, and run them.

17.
Power-saving design of the MCU in smart sensors
The microcontroller (MCU) in a smart sensor brings the sensor a great deal of intelligence, and it should also bring some intelligence to the sensor's power saving; the sensor's waiting state is one avenue that can be exploited. From the power-saving point of view, a sensor's waiting state can be divided into two kinds: slightly busy and fully stopped. Correspondingly, the MCU can adopt an idle strategy or a power-down strategy in these two situations, and combining the two achieves the best power savings. Following this idea, a corresponding combined method is proposed, and a photoelectric sensor based on an MCS-51 series MCU is used as an example to discuss the concrete design and the power savings actually achieved. The method is also widely applicable to other MCU families.
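The combined strategy can be stated as a small, hardware-free sketch; the states, mode names, and power figures below are invented for illustration (on an actual MCS-51 device the two modes correspond to the IDL and PD bits of the PCON register):

```python
from enum import Enum

class Wait(Enum):
    SLIGHTLY_BUSY = "peripherals still ticking"   # e.g. timer-driven sampling
    FULLY_STOPPED = "nothing to do until an external event"

def choose_mode(wait_state):
    """Return the power mode the MCU should enter for a given wait state."""
    if wait_state is Wait.SLIGHTLY_BUSY:
        return "idle"        # CPU clock gated, timers and interrupts alive
    return "power-down"      # oscillator stopped, wake on external interrupt

# Rough energy comparison over a duty cycle that is mostly waiting.
ACTIVE_MW, IDLE_MW, POWER_DOWN_MW = 30.0, 8.0, 0.2   # made-up figures

def average_power(active_fraction, slightly_busy_fraction):
    stopped_fraction = 1.0 - active_fraction - slightly_busy_fraction
    return (active_fraction * ACTIVE_MW
            + slightly_busy_fraction * IDLE_MW
            + stopped_fraction * POWER_DOWN_MW)

print(choose_mode(Wait.SLIGHTLY_BUSY))        # -> idle
print(average_power(0.05, 0.25))              # combined idle + power-down
print(average_power(0.05, 0.95))              # idle-only strategy, for contrast
```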

18.
Stochastic response surface method (SRSM) is a technique used for reliability analysis of complex structural systems having implicit or time-consuming limit state functions. The main aspects of the SRSM are the collection of sample points, the approximation of the response surface, and the estimation of the probability of failure. In this paper, sample points are selected close to the most probable point of failure and the actual limit state surface (LSS). The response surface is fitted using the weighted regression technique, which allows the fitting points to be weighted based on their distance from the LSS. The cumulant generating function (CGF) of the response surface is derived analytically. The saddlepoint approximation (SPA) method is utilized to compute the probability of failure of the structural system. Finally, four numerical examples compare the proposed algorithm with the traditional quadratic polynomial SRSM, Kriging-based SRSM, and direct MCS.
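A stripped-down numerical sketch of the weighted-regression step is given below; the limit state function, the weighting scheme, and the sample plan are invented, and plain Monte Carlo on the fitted surface stands in for the CGF/saddlepoint step of the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

def g(x):                       # true (here: cheap) limit state function,
    return 3.0 - x[..., 0] - 0.5 * x[..., 1] ** 2   # failure when g <= 0

# 1. Sample points roughly around the limit state surface.
X = rng.normal(loc=[1.0, 1.5], scale=0.8, size=(40, 2))
y = g(X)

# 2. Weighted least-squares fit of a quadratic polynomial surface:
#    points closer to the limit state surface (|g| small) get larger weight.
w = 1.0 / (np.abs(y) + 1e-3)
A = np.column_stack([np.ones(len(X)), X[:, 0], X[:, 1],
                     X[:, 0] ** 2, X[:, 1] ** 2, X[:, 0] * X[:, 1]])
coeff, *_ = np.linalg.lstsq(A * w[:, None], y * w, rcond=None)

def g_hat(x):
    a = np.column_stack([np.ones(len(x)), x[:, 0], x[:, 1],
                         x[:, 0] ** 2, x[:, 1] ** 2, x[:, 0] * x[:, 1]])
    return a @ coeff

# 3. Failure probability from the surrogate (direct Monte Carlo stands in for
#    the cumulant-generating-function / saddlepoint step of the paper).
samples = rng.normal(size=(100_000, 2))
pf_surrogate = np.mean(g_hat(samples) <= 0.0)
pf_reference = np.mean(g(samples) <= 0.0)
print(f"surrogate Pf = {pf_surrogate:.4f}, direct MCS Pf = {pf_reference:.4f}")
```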

19.
This paper presents some recent results in interpreter optimization. The techniques of shallow binding and repetitive interpretation of tail recursive functions are adapted to Lisp with static scoping as the binding method for all identifiers. Then a new technique of interpreting "covered tail recursive" functions is proposed. The purpose of the paper is to show that the extra expense for static scoping can be kept small by combining these techniques.
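The "repetitive interpretation" idea — a user-level call in tail position is handled by rebinding and looping rather than re-entering the evaluator — can be sketched with a toy interpreter. The little language, its representation, and the example function are invented; shallow binding and the "covered tail recursive" extension are not shown:

```python
import operator

PRIMS = {"+": operator.add, "-": operator.sub, "*": operator.mul,
         "=": operator.eq, "<": operator.lt}

def evaluate(expr, env, funcs):
    while True:                                   # the "repetitive" loop
        if isinstance(expr, (int, float)):
            return expr
        if isinstance(expr, str):                 # variable reference
            return env[expr]
        op = expr[0]
        if op == "if":                            # both branches are in tail
            _, cond, then_e, else_e = expr        # position: just loop on the
            expr = then_e if evaluate(cond, env, funcs) else else_e
            continue                              # chosen branch
        if op == "prim":                          # primitive application is
            _, name, a, b = expr                  # never a tail call
            return PRIMS[name](evaluate(a, env, funcs),
                               evaluate(b, env, funcs))
        if op == "call":                          # user call in tail position:
            _, fname, args = expr                 # evaluate the arguments,
            params, body = funcs[fname]           # rebind, and loop instead of
            values = [evaluate(a, env, funcs) for a in args]   # recursing
            env = dict(zip(params, values))
            expr = body
            continue
        raise ValueError(f"unknown form: {expr!r}")

# Factorial with an accumulator: tail recursive, so it runs in constant
# interpreter (Python) stack depth even for large n.
funcs = {"fact": (["n", "acc"],
                  ("if", ("prim", "=", "n", 0),
                         "acc",
                         ("call", "fact", [("prim", "-", "n", 1),
                                           ("prim", "*", "n", "acc")])))}
print(evaluate(("call", "fact", [5, 1]), {}, funcs))          # -> 120
print(evaluate(("call", "fact", [10000, 1]), {}, funcs) > 0)  # no stack overflow
```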

20.
罗泽林  任强  罗航 《计算机应用》2011,31(11):3143-3148
Obtaining minimal cut sets (MCS) with the "non-canonical" binary decision diagram (BDD) technique may mask the effect of non-monotonic basic events. Building on the key technique of "inheritance", this paper proposes a uniformly encoded "canonical" BDD technique for obtaining the MCS of non-coherent fault trees. Combined with the Quine-McCluskey (Q-M) algorithm, the complete process of jointly obtaining the prime implicant sets (PIS) of non-coherent fault trees is studied. A practical example shows that the proposed method can not only accurately extract the MCS of a non-coherent fault tree, but also automatically obtain its PIS.
