Similar Documents
20 similar documents found (search time: 15 ms)
1.
The superblock: An effective technique for VLIW and superscalar compilation   (Cited by 8; self-citations: 1, citations by others: 7)
A compiler for VLIW and superscalar processors must expose sufficient instruction-level parallelism (ILP) to effectively utilize the parallel hardware. However, ILP within basic blocks is extremely limited for control-intensive programs. We have developed a set of techniques for exploiting ILP across basic block boundaries. These techniques are based on a novel structure called the superblock. The superblock enables the optimizer and scheduler to extract more ILP along the important execution paths by systematically removing constraints due to the unimportant paths. Superblock optimization and scheduling have been implemented in the IMPACT-I compiler. This implementation gives us a unique opportunity to fully understand the issues involved in incorporating these techniques into a real compiler. Superblock optimizations and scheduling are shown to be useful while taking into account a variety of architectural features.
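The key construction behind the superblock is tail duplication: any trace block reachable from outside the trace (a side entrance) is copied so the trace becomes single-entry. A minimal sketch, assuming a toy CFG representation of our own (block names and the predecessor map are illustrative, not from the paper):

```python
def form_superblock(trace, preds):
    """Copy every trace block that has a predecessor outside the trace
    (a side entrance), so the resulting superblock is single-entry."""
    in_trace = set(trace)
    superblock = [trace[0]]           # the entry block is kept as-is
    for block in trace[1:]:
        side_entrance = any(p not in in_trace for p in preds[block])
        # From the first side entrance onward the tail is duplicated, so
        # off-trace paths jump to the copies and the trace stays single-entry.
        superblock.append(block + "'" if side_entrance else block)
        if side_entrance:
            in_trace = set()          # force duplication of the remaining tail
    return superblock

# Trace A->B->C where C is also reached from off-trace block X:
preds = {"A": [], "B": ["A"], "C": ["B", "X"]}
print(form_superblock(["A", "B", "C"], preds))  # ['A', 'B', "C'"]
```

With the side entrance into C redirected to the copy C', the optimizer can treat A-B-C' as a straight-line region.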

2.
Tamiya Onodera 《Software》1993,23(5):477-485
In language systems that support separate compilation, we often observe that header files are internalized over and over again when the source files that depend on them are compiled. Making a compiler a long-lived server eliminates such redundant processing of header files, thus reducing the compilation time. The paper first describes compilation servers for C-family languages in general, and then a compilation server for our C-based object-oriented language in particular. The performance results of our server show that a compilation server can substantially shorten the compilation time.
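The core idea, caching parsed header state in a long-lived process so each header is internalized once, can be sketched as follows. This is a simplification under our own assumptions (the class, `parse_header` stand-in, and `parses` counter are illustrative, not the paper's design):

```python
class CompilationServer:
    """Long-lived process that reuses parsed headers across compilations."""

    def __init__(self, parse_header):
        self._parse_header = parse_header
        self._cache = {}              # header path -> parsed representation
        self.parses = 0               # how many real parses were performed

    def header(self, path):
        if path not in self._cache:   # parse each header only once
            self._cache[path] = self._parse_header(path)
            self.parses += 1
        return self._cache[path]

    def compile(self, source, headers):
        # Reuse cached header state instead of re-internalizing it.
        env = [self.header(h) for h in headers]
        return (source, env)

server = CompilationServer(parse_header=lambda p: "parsed:" + p)
server.compile("a.c", ["stdio.h", "util.h"])
server.compile("b.c", ["stdio.h", "util.h"])
print(server.parses)  # 2 parses for 4 header inclusions
```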

3.
Gang Luo  Tong Chen  Hao Yu 《Software》2007,37(9):909-933
For user-friendliness purposes, many modern software systems provide progress indicators for long-running tasks. These progress indicators continuously estimate the percentage of the task that has been completed and when the task will finish. However, none of the existing program compilation tools provide a non-trivial progress indicator, although it often takes minutes or hours to build a large program. In this paper, we investigate the problem of supporting such progress indicators. We first discuss the goals and challenges inherent in this problem. Then we present a set of techniques that are sufficient for implementing a simple yet useful progress indicator for program compilation. Finally, we report on an initial implementation of these techniques in GNU Make. Copyright © 2006 John Wiley & Sons, Ltd.
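A simple progress indicator of this kind can be built by predicting a per-file cost and reporting completed cost over total cost. A minimal sketch, assuming line counts stand in for cost (the cost model and file names are illustrative, not the paper's):

```python
def progress(costs, done):
    """Return the estimated fraction of the build that is complete.

    costs: mapping of file -> predicted compile cost (e.g. line count)
    done:  set of files already compiled
    """
    total = sum(costs.values())
    finished = sum(c for f, c in costs.items() if f in done)
    return finished / total

costs = {"parser.c": 800, "lexer.c": 200, "main.c": 1000}
print(progress(costs, {"parser.c"}))             # 0.4
print(progress(costs, {"parser.c", "lexer.c"}))  # 0.5
```

A real indicator would refine the cost model as files finish (e.g. by observed compile speed) to sharpen the remaining-time estimate.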

4.
The experts may have difficulty in expressing all their preferences over alternatives or criteria, and thus produce an incomplete linguistic preference relation. Consistency plays an important role in estimating unknown values from an incomplete linguistic preference relation. Many methods have been developed to obtain a complete linguistic preference relation based on additive consistency, but some unreasonable values may be produced in the estimation process. To overcome this issue, we propose a new characterisation of the multiplicative consistency of the linguistic preference relation, present an algorithm to estimate missing values from an incomplete linguistic preference relation, and establish a decision support system for aiding the experts to complete their linguistic preference relations in a more consistent way. Some examples are also given to illustrate the proposed methods.
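The multiplicative-transitivity idea underlying such estimation (a_ik = a_ij · a_jk) can be sketched numerically; this is our simplified illustration on a plain numeric relation, not the paper's linguistic algorithm, and averaging over intermediate alternatives is our assumption:

```python
def estimate_missing(a, i, k, known):
    """Estimate a[i][k] by averaging a[i][j] * a[j][k] over every
    intermediate alternative j for which both entries are known."""
    chains = [a[i][j] * a[j][k] for j in known
              if a[i][j] is not None and a[j][k] is not None]
    return sum(chains) / len(chains)

# A 3-alternative multiplicative preference relation with a[0][2] missing:
a = [[1.0, 2.0, None],
     [0.5, 1.0, 3.0],
     [None, 1 / 3, 1.0]]
print(estimate_missing(a, 0, 2, known=[1]))  # 2.0 * 3.0 = 6.0
```

The abstract's point is that estimating via multiplicative rather than additive consistency avoids producing values that fall outside the admissible scale.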

5.
ABSTRACT

To ensure the reasonable application and perfect the theory of decision making with interval multiplicative preference relations (IMPRs), this paper continues the study of decision making with IMPRs. After reviewing previous consistency concepts for IMPRs, we find that Krejčí's consistency concept is more flexible and natural than the others. However, this concept alone is insufficient for addressing IMPRs. Considering this fact, this paper studies the inconsistent and incomplete IMPRs that are usually encountered in practice. First, programming models for addressing inconsistent and incomplete IMPRs are constructed. Then, this paper studies the consensus of individual IMPRs and defines a consensus index using the defined correlation coefficient. When the consensus level does not satisfy the requirement, a programming model for improving the consensus level is built, which also ensures consistency. Subsequently, a procedure for group decision making with IMPRs is offered, and associated examples are provided to show the application of the main theoretical results.

6.
Tiled multi-core architectures have become an important kind of multi-core design owing to their good scalability and low power consumption. Stream programming has been productively applied to a number of important application domains, and it provides an attractive way to exploit parallelism. However, the architectural characteristics of tiled multi-cores, namely large numbers of cores, a deep memory hierarchy and exposed communication between tiles, present a performance challenge for stream programs running on them. In this paper, we present StreamTMC, an efficient stream compilation framework that optimizes the execution of stream applications for the tiled multi-core. The framework is composed of three optimization phases. First, a software pipelining schedule is constructed to exploit parallelism. Second, an efficient hybrid SPM/cache buffer allocation algorithm and a data-copy elimination mechanism are proposed to improve the efficiency of data access. Last, a communication-aware mapping is proposed to reduce network communication and synchronization overhead. We implement the StreamTMC compiler on Godson-T, a 64-core tiled architecture, and conduct an experimental study to verify its effectiveness. The experimental results indicate that StreamTMC achieves an average improvement of 58% over the performance before optimization.

7.
Computer architecture design requires careful attention to the balance between the complexity of code scheduling problems and the cost and feasibility of building a machine. In this paper, we show that recently developed software pipelining algorithms produce optimal or near-optimal code for a large class of loops when the target architecture is a clean pipelined parallel machine. The important feature of these machines is the absence of structural hazards. We argue that the robustness of the scheduling algorithms and the relatively simple hardware make these machines realistic and cost-effective. To illustrate the delicate balance between architecture and scheduling complexity, we show that scheduling with structural hazards is NP-hard, and that there are machines with simple structural hazards for which vectorization and the software pipelining techniques generate poor code.
Supported in part by NSF Grants DCR-8502884, CCR-8704367, ONR Grant N00014-86-K-0215, and the Cornell NSF Supercomputing Center. Supported by NSF Grant CCR-8702668 and an IBM Faculty Development Award.
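One standard quantity in software pipelining (not specific to this paper) is the resource-constrained minimum initiation interval, ResMII: the smallest number of cycles between successive loop iterations that the machine's resources permit. A minimal sketch, with illustrative resource names and counts:

```python
import math

def res_mii(uses, units):
    """Resource-constrained minimum initiation interval.

    uses[r]:  times resource r is used per loop iteration
    units[r]: copies of resource r in the machine
    """
    # Each resource individually bounds the initiation interval from below;
    # the tightest bound wins.
    return max(math.ceil(uses[r] / units[r]) for r in uses)

# 4 ALU ops and 2 memory ops per iteration, on a machine with 2 ALUs
# and 1 memory port:
print(res_mii({"alu": 4, "mem": 2}, {"alu": 2, "mem": 1}))  # 2
```

On a clean machine without structural hazards, a schedule achieving an interval near this bound is what the cited algorithms can find; with structural hazards, even deciding schedulability becomes NP-hard.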

8.
This paper presents an algorithm for optimizing programs at the compilation stage. The algorithm analyzes the execution of various program segments at the available processor frequencies and selects an energy-efficient frequency schedule subject to a constraint on the additional execution time incurred. The algorithm is complemented by an operating-system manager that sets the desired processor mode and resolves conflicts in a multi-program environment.
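The selection problem can be sketched as a search over per-segment frequency assignments that minimizes energy while keeping total time within a bound. This is our own brute-force illustration with toy speed/power models, not the paper's algorithm:

```python
from itertools import product

def best_schedule(segments, freqs, time_limit):
    """segments: list of work amounts; freqs: list of (speed, power) pairs.
    Return (energy, speed assignment) minimizing energy within time_limit,
    or None if no assignment is feasible."""
    best = None
    for choice in product(freqs, repeat=len(segments)):
        time = sum(w / s for w, (s, _) in zip(segments, choice))
        if time > time_limit:
            continue                      # violates the time constraint
        energy = sum((w / s) * p for w, (s, p) in zip(segments, choice))
        if best is None or energy < best[0]:
            best = (energy, [s for s, _ in choice])
    return best

# Two segments of 10 work units; frequencies (speed, power) = (1, 1), (2, 4).
# Running both slow takes 20 > 15, so one segment must run fast:
print(best_schedule([10, 10], [(1, 1), (2, 4)], time_limit=15))  # (30.0, [1, 2])
```

A production algorithm would of course avoid the exponential enumeration, but the objective and constraint are the same.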

9.
10.
Software evolution studies have traditionally focused on individual products. In this study we scale up the idea of software evolution by considering software compilations composed of a large quantity of independently developed products, engineered to work together. With the success of libre (free, open source) software, these compilations have become common in the form of ‘software distributions’, which group hundreds or thousands of software applications and libraries into an integrated system. We have performed an exploratory case study on one of them, Debian GNU/Linux, finding some significant results. First, Debian has been doubling in size every 2 years, totalling about 300 million lines of code as of 2007. Second, the mean size of packages has remained stable over time. Third, the number of dependencies between packages has been growing quickly. Finally, while C is still by far the most commonly used programming language for applications, use of the C++, Java, and Python languages has significantly increased. The study helps not only to understand the evolution of Debian, but also yields insights into the evolution of mature libre software systems in general.

Jesus M. Gonzalez-Barahona teaches and researches at Universidad Rey Juan Carlos, Mostoles (Spain). His research interests include libre software development, with a focus on quantitative and empirical studies, and distributed tools for collaboration in libre software projects. He works in the GSyC/LibreSoft research team. Gregorio Robles is Associate Professor at the Universidad Rey Juan Carlos, where he earned his PhD in 2006. His research interests lie in the empirical study of libre software, ranging from technical issues to those related to the human resources of the projects. Martin Michlmayr has been involved in various free and open source software projects for well over 10 years. He acted as the leader of the Debian project for two years and currently serves on the board of the Open Source Initiative (OSI). Martin works for HP as an Open Source Community Expert and acts as the community manager of FOSSBazaar. Martin holds Master's degrees in Philosophy, Psychology and Software Engineering, and earned a PhD from the University of Cambridge. Juan José Amor has an M.Sc. in Computer Science from the Universidad Politécnica de Madrid and is currently pursuing a Ph.D. at the Universidad Rey Juan Carlos, where he is also a project manager. His research interests are related to libre software engineering, mainly effort and schedule estimates in libre software projects. Since 1995 he has collaborated in several libre software organizations; he is also co-founder of LuCAS, the best-known libre software documentation portal in Spanish, and Hispalinux, the biggest Spanish Linux user group. He also collaborates with Linux+. Daniel M. German is associate professor of computer science at the University of Victoria, Canada. His main areas of interest are software evolution, open source software engineering and intellectual property.

11.
12.
Auriga, an experimental simulator that utilizes five compilation techniques to reduce runtime complexity and promote concurrency in the simulation of VHDL models, is described. Auriga is designed to translate a model using any VHDL construct into an optimized, parallel simulation. Auriga's distributed simulation uses a message-passing network to simulate a single VHDL model. The authors present results obtained with seven benchmark models to illustrate the compiler's aggressive optimization techniques: temporal analysis, waveform propagation, input desensitization, concurrent evaluation, and statement compaction.

13.
Language Resources and Evaluation - Research on speech technologies necessitates spoken data, which is usually obtained through read recorded speech, and specifically adapted to the research needs....

14.
A new and powerful approach to threading is proposed, designed to improve the responsiveness of concurrent logic programs for distributed, real-time AI applications. The technique builds on previously proposed scheduling techniques to improve responsiveness by synchronously passing control and data directly from a producer to a consumer. Furthermore, synchronous transfer of data requires less buffering, and so less garbage is produced. Arguments are also passed in registers, further reducing overheads.

15.
In this article we describe production compilation, a mechanism for modeling skill acquisition. Production compilation has been developed within the ACT-Rational (ACT-R; J. R. Anderson, D. Bothell, M. D. Byrne, & C. Lebiere, 2002) cognitive architecture and consists of combining and specializing task-independent procedures into task-specific procedures. The benefit of production compilation for researchers in human factors is that it enables them to test the strengths and weaknesses of their task analyses and user models by allowing them to model the learning trajectory from the main task level and the unit task level down to the keystroke level. We provide an example of this process by developing and describing a model learning a simulated air traffic controller task. Actual or potential applications of this research include the evaluation of user interfaces, the design of systems that support learning, and the building of user models.
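The combining step can be sketched as merging two rules that fire in sequence into one specialized rule, dropping conditions of the second that the first already produces. The rule representation below is our own illustration, not ACT-R's actual production syntax:

```python
def compose(rule1, rule2):
    """Merge rule1 followed by rule2 into one task-specific rule:
    conditions of rule2 already produced by rule1's actions are dropped,
    and the remaining conditions must all hold up front."""
    satisfied = set(rule1["actions"])
    conditions = rule1["conditions"] + [c for c in rule2["conditions"]
                                        if c not in satisfied]
    return {"conditions": conditions,
            "actions": rule1["actions"] + rule2["actions"]}

# A retrieval rule followed by a response rule collapse into one rule
# that responds directly, skipping the intermediate retrieval test:
r1 = {"conditions": ["goal=answer"], "actions": ["retrieved=fact"]}
r2 = {"conditions": ["retrieved=fact"], "actions": ["respond=fact"]}
print(compose(r1, r2))
# {'conditions': ['goal=answer'], 'actions': ['retrieved=fact', 'respond=fact']}
```

This mirrors the speed-up the article models: the learner moves from general, multi-step procedures to specialized one-step ones.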

16.
17.
18.
《Automatica》1987,23(1):41-55
The graph model for conflicts is developed as a comprehensive methodology for realistically analyzing real world conflicts. The graph form takes outcomes, rather than individual decisions, as the basic units for describing a conflict. In the graph form, many solution concepts can be formulated for both two-player and multiplayer games. In particular, specific mathematical criteria are presented for categorizing solution concepts which can be used for predicting equilibria in n-player games. One of the criteria on which this taxonomy of solution concepts is based is the number of steps ahead a player may think, in terms of the reactions of other players to his actions. Other criteria include which players take part in the sanctioning process, and whether sanctioning moves are restricted to those which lead to immediate improvements for the mover. In order to demonstrate the insight which decision makers can gain from studying a dispute using the graph model, various solution concepts are applied to an important environmental engineering problem.

19.
Actuator fault diagnosis: an adaptive observer-based technique   (Cited by 1; self-citations: 0, citations by others: 1)
This paper presents a novel approach for the fault diagnosis of actuators in known deterministic dynamic systems by using an adaptive observer technique. Systems without model uncertainty are initially considered, followed by a discussion of a general situation where the system is subjected to either model uncertainty or external disturbance. Under the assumption that the system state observer can be designed such that the observation error is strictly positive real (SPR), an adaptive diagnostic algorithm is developed to diagnose the fault, and a modified version is proposed for the general system to improve robustness. The method is demonstrated through its application to a simulated second-order system.

20.
View integration: a step forward in solving structural conflicts   (Cited by 1; self-citations: 0, citations by others: 1)
Thanks to the development of the federated systems approach on the one hand and the emphasis on user involvement in database design on the other, the interest in schema integration techniques is significantly increasing. Theories, methods and tools have been proposed. Conflict resolution is the key issue. Different perceptions by schema designers may lead to different representations. A way must be found to support these different representations within a single system. Most current integration methodologies rely on modification of initial schemas to solve the conflicts. This approach needs a strong interaction with the database administrator, who has authority to modify the initial schemas. This paper presents an approach to view integration specifically intended to support the coexistence of different representations of the same real-world objects. The main characteristics of this approach are the following: automatic resolution of structural conflicts, conflict resolution performed without modification of initial views, use of a formal declarative approach for user definition of inter-view correspondences, applicability to a variety of data models, and automatic generation of structural and operational mappings between the views and the integrated schema. Allowing users' views to be kept unchanged should result in improved user satisfaction. Each user is able to define his own view of the database, without having to conform to some other user's view. Moreover, such a feature is essential in database integration if existing programs are to be preserved.


Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号