20 similar documents found.
1.
《Journal of Parallel and Distributed Computing》1988,5(5):517-550
This paper addresses the analysis of subroutine side effects in the ParaScope programming environment, an ambitious collection of tools for developing, understanding, and compiling parallel programs. In spite of significant progress in the optimization of programs for execution on parallel and vector computers, compilers must still be very conservative when optimizing the code surrounding a call site, due to the lack of information about the code in the subroutine being invoked. This has resulted in the development of algorithms for interprocedural analysis of the side effects of a subroutine, which summarize the body of a subroutine, producing approximate information to improve optimization. This paper reviews the effectiveness of these methods in preparing programs for execution on parallel computers. It is shown that existing techniques are insufficient and a new technique, called regular section analysis, is described. Regular section analysis extends the lattice used in previous interprocedural analysis methods to one that is rich enough to represent common array access patterns: elements, rows, columns, and their higher-dimensional analogs. Regular sections are defined, their properties are established, and the modifications to existing interprocedural analysis algorithms required to handle regular sections are presented. Among these modifications are methods for dealing with language features that reshape array parameters at call sites. In addition to improved precision of summary information, we also examine two problems crucial to effective parallelization. The first addresses the need for information about which variables are always redefined as a side effect of a call and the second addresses the requirement that, for parallel programming, information about side effects must be qualified by information about any critical regions in which those side effects take place. These problems are solved by extensions to existing interprocedural dataflow analysis frameworks.
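To fix ideas about what a regular-section summary records, here is a small, simplified sketch in Python (an assumed representation; it is not the paper's lattice or algorithm): each array dimension is summarized as a constant index, the whole dimension, or unknown, and summaries from different call sites are merged by widening.

```python
# Illustrative sketch only: a per-dimension "regular section" summary for array
# side effects. Each dimension is either a known constant index, a full range
# ':' (whole dimension), or 'T' (unknown).
from dataclasses import dataclass

@dataclass(frozen=True)
class RegularSection:
    dims: tuple  # each entry: an int, ':' (whole dimension), or 'T' (unknown)

def merge_dim(a, b):
    """Join two per-dimension facts: agreement is kept, disagreement widens."""
    if a == b:
        return a
    if 'T' in (a, b):
        return 'T'
    return ':'  # a constant vs. another constant or a range widens to the whole dimension

def merge(s1: RegularSection, s2: RegularSection) -> RegularSection:
    """Summarize two accesses to the same array (e.g., from two call sites)."""
    return RegularSection(tuple(merge_dim(a, b) for a, b in zip(s1.dims, s2.dims)))

# A(5, :) modified on one path and A(7, :) on another -> the summary widens to A(:, :)
row5 = RegularSection((5, ':'))
row7 = RegularSection((7, ':'))
print(merge(row5, row7))
```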
2.
3.
In this paper, we report on the role of the Urdu grammar in the Parallel Grammar (ParGram) project (Butt, M., King, T. H., Niño, M.-E., & Segond, F. (1999). A grammar writer’s cookbook. CSLI Publications; Butt, M., Dyvik, H., King, T. H., Masuichi, H., & Rohrer, C. (2002). ‘The parallel grammar project’. In: Proceedings of COLING 2002, Workshop on grammar engineering and evaluation, pp. 1–7). The Urdu grammar was able to take advantage of standards in analyses set by the original grammars in order to speed development. However, novel constructions, such as correlatives and extensive complex predicates, resulted in expansions of the analysis feature space as well as extensions to the underlying parsing platform. These improvements are now available to all the project grammars.
4.
《Interfaces in Computing》1984,2(2):111-130
A distributed processing system based on UNIX (trademark of Bell Laboratories) is currently operational at New Mexico State University. The system, which is composed of a variety of PDP-11 and LSI-11 processing elements, allows users to schedule tasks to run totally or in parallel on its satellite units. A UNIX-like kernel runs on the satellite processors, which allows virtually any process to run on any processing element. The architecture of the system is a star configuration with a PDP-11/34a as the central or host node. To support experiments in parallel processing, a parallel version of the C programming language has been developed which allows users to write programs as a collection of functional units that can be automatically scheduled to run on the satellite processors. In this paper the structure of the system is described in terms of hardware and software, and our implementation of pc, a parallel C language, is discussed.
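The following Python fragment is only a conceptual stand-in for the idea described above: a program expressed as functional units that a dispatcher schedules onto satellite processors. None of the names come from the pc language itself.

```python
# Conceptual sketch only: the paper's "pc" language let a program be written as
# functional units that the system schedules onto satellite processors. Here the
# same idea is mimicked with Python's multiprocessing Pool; function and
# variable names are invented for illustration.
from multiprocessing import Pool

def functional_unit(chunk):
    # Stand-in for a unit of work that could run on any processing element.
    return sum(x * x for x in chunk)

if __name__ == "__main__":
    data = list(range(1000))
    chunks = [data[i:i + 250] for i in range(0, len(data), 250)]
    with Pool(processes=4) as pool:          # the "satellite processors"
        partials = pool.map(functional_unit, chunks)
    print(sum(partials))
```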
5.
Dataflow query execution in a parallel main-memory environment
In this paper, the performance and characteristics of the execution of various join-trees on a parallel DBMS are studied. The results of this study are a step toward the design of a query optimization strategy that is fit for parallel execution of complex queries. Among other findings, synchronization issues are identified that limit the performance gain from parallelism. A new hash-join algorithm is introduced that has fewer synchronization constraints than the known hash-join algorithms. Also, the behavior of individual join operations in a join-tree is studied in a simulation experiment. The results show that the introduced Pipelining hash-join algorithm yields a better performance for multi-join queries. The format of the optimal join-tree appears to depend on the size of the operands of the join: a multi-join between small operands performs best with a bushy schedule; larger operands are better off with a linear schedule. The results from the simulation study are confirmed with an analytic model for dataflow query execution.
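As an illustration of why pipelining reduces synchronization, here is a minimal hash-join sketch in Python (not the paper's algorithm): once the build phase finishes, probe tuples stream through and a downstream join can start consuming results immediately.

```python
# A minimal sketch of a pipelined hash join: the build phase hashes one operand,
# after which probe tuples stream through and matches are emitted immediately,
# so a consumer join need not wait for the whole result to materialize.
from collections import defaultdict

def hash_join(build_rows, probe_rows, build_key, probe_key):
    table = defaultdict(list)
    for b in build_rows:                 # build phase: blocking, one operand only
        table[b[build_key]].append(b)
    for p in probe_rows:                 # probe phase: streamed, results pipelined
        for b in table.get(p[probe_key], ()):
            yield {**b, **p}

orders = [{"oid": 1, "cid": 10}, {"oid": 2, "cid": 11}]
custs  = [{"cid": 10, "name": "a"}, {"cid": 11, "name": "b"}]
items  = [{"oid": 1, "item": "x"}, {"oid": 2, "item": "y"}]

# Two joins chained: the second starts consuming while the first still produces.
j1 = hash_join(custs, orders, "cid", "cid")
j2 = hash_join(items, j1, "oid", "oid")
print(list(j2))
```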
6.
Luo Yinfang 《Journal of Computer Science and Technology》1988,3(3):203-213
This paper presents a high-speed multiplication algorithm for the mixed number system of the ordinary binary number and the symmetric redundant binary number. It is implemented with multivalued logic theory, and 3-valued and 2-valued circuits are used. The 3-valued circuit proposed in this paper is an emitter-coupled logic circuit with high speed, simplicity and powerful functions. A 3-valued ECL threshold gate can simultaneously produce six types of one-variable operations. The array multiplier, designed with the algorithm and the circuits, is fast and simple, and is suitable for building LSI. It can be used in a high-speed computer just as an ordinary binary multiplier.
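A toy model of the number system (deliberately far removed from the 3-valued ECL circuits themselves) may help: symmetric redundant binary digits are drawn from {-1, 0, 1}, so a value has several encodings, which is what hardware exploits to limit carry propagation. The helper names below are invented for illustration.

```python
# Toy model only: numbers in the symmetric redundant binary system use digits
# {-1, 0, 1}; we show the representation and a partial-product style multiply.
def value(digits):
    """digits[i] is the coefficient of 2**i, each in {-1, 0, 1}."""
    return sum(d * (1 << i) for i, d in enumerate(digits))

def multiply(sd_digits, y):
    """Multiply a signed-digit number by an ordinary integer via shifted partial products."""
    return sum(d * (y << i) for i, d in enumerate(sd_digits))

# 7 can be written as 0111 (1+2+4) or as 100(-1) (8-1) in signed digits:
seven_a = [1, 1, 1, 0]
seven_b = [-1, 0, 0, 1]
assert value(seven_a) == value(seven_b) == 7
print(multiply(seven_b, 13))   # 7 * 13 = 91, computed from the redundant form
```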
7.
8.
Tanja Suomalainen, Outi Salo 《Journal of Systems and Software》2011,84(6):958-975
Product roadmapping enhances the product development process by enabling early information and long-term decision making about the products in order to deliver the right products to the right markets at the right time. However, relatively little scientific knowledge is available on the application and usefulness of product roadmapping in the software product development context. This study develops a framework for software product roadmapping, which is then used to study the critical aspects of the product roadmapping process. The collection of empirical evidence includes both quantitative and qualitative data, which sheds further insight into the complexities involved in product roadmapping. Results revealed that organizations view the product roadmap mainly as a tool for strategic decision making, as it aims at showing the future directions of the company's products. However, only a few companies appear to have an explicit approach for handling the mechanisms for creating and maintaining such a roadmap. Finally, it is suggested that the strategic importance of product roadmapping is likely to increase in the future and, as a conclusion, a new type of agility is required in order to survive in the turbulent and competitive software business environment.
9.
J. Lyu, A. Gunasekaran, V. Kachitvichyanukul 《International Journal of Systems Science》2013,44(6):1333-1341
The availability of more and more cost-effective and powerful parallel computers has enhanced the ability of the operations research community to solve more laborious computational problems. In this paper an attempt has been made to implement a parallel simulation run dispatcher, with the objective of studying the feasibility of establishing a portable and efficient parallel programming environment. This parallel simulation run dispatcher can be applied to both terminating-type and steady-state-type simulation models. The algorithm is then transferred to and executed on various other shared-memory multiprocessor systems to illustrate its portability. Another contribution of this paper is to verify whether the performance of the portable code and the non-portable code of the same algorithm differs significantly on a specific parallel system, using an analysis of covariance model.
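A generic sketch of the dispatcher idea, written in Python with invented names (it is not the authors' portable implementation): independent replications of a terminating simulation are farmed out to processors and their results collected.

```python
# Generic stand-in for a "simulation runs dispatcher": independent replications
# of a terminating simulation run in parallel and the replication means are
# gathered for analysis. The toy model and names are invented for illustration.
import random
from multiprocessing import Pool

def one_replication(seed):
    """Toy terminating simulation: mean of 1000 exponential 'service times'."""
    rng = random.Random(seed)
    samples = [rng.expovariate(1.2) for _ in range(1000)]
    return sum(samples) / len(samples)

if __name__ == "__main__":
    seeds = range(32)                       # 32 independent replications
    with Pool() as pool:
        means = pool.map(one_replication, seeds)
    print("grand mean:", sum(means) / len(means))
```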
10.
Parallel database systems will very probably be the future for high-performance data-intensive applications. In the past decade, many parallel database systems have been developed, together with many languages and approaches to specify operations in these systems. A common background is still missing, however. This paper proposes an extended relational algebra for this purpose, based on the well-known standard relational algebra. The extended algebra provides both complete database manipulation language features, and data distribution and process allocation primitives to describe parallelism. It is defined in terms of multi-sets of tuples to allow handling of duplicates and to obtain a close connection to the world of high-performance data processing. Due to its algebraic nature, the language is well suited for optimization and parallelization through expression rewriting. The proposed language can be used as a database manipulation language on its own, as has been done in the PRISMA parallel database project, or as a formal basis for other languages, like SQL.
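The sketch below (assumed notation, not the paper's formal algebra) shows multi-set semantics and a distribution primitive in miniature: selection and projection preserve duplicate multiplicities, and a hash-partition operator hints at how data distribution can be expressed alongside the manipulation operators.

```python
# Miniature illustration of relational operators over multi-sets of tuples,
# plus a hash-partition primitive; representation and names are invented.
from collections import Counter

def select(rel: Counter, pred) -> Counter:
    """Selection on a multi-set relation; duplicate tuples keep their multiplicity."""
    return Counter({t: n for t, n in rel.items() if pred(dict(t))})

def project(rel: Counter, attrs) -> Counter:
    """Projection without duplicate elimination (multi-set semantics)."""
    out = Counter()
    for t, n in rel.items():
        d = dict(t)
        out[tuple((a, d[a]) for a in attrs)] += n
    return out

def partition(rel: Counter, attr, n_nodes):
    """Hash-partition a relation over n_nodes processing nodes."""
    parts = [Counter() for _ in range(n_nodes)]
    for t, n in rel.items():
        parts[hash(dict(t)[attr]) % n_nodes][t] += n
    return parts

R = Counter({(("id", 1), ("city", "ams")): 2,   # a duplicate tuple, multiplicity 2
             (("id", 2), ("city", "ens")): 1})
print(project(select(R, lambda r: r["city"] == "ams"), ["city"]))
print([sum(p.values()) for p in partition(R, "id", 3)])
```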
11.
Kenneth I. Joy 《The Visual Computer》1986,2(2):63-71
A Problem Solving Environment (PSE) is an integrated system of application tools that support the solution of a given problem, or a set of related problems. Paramount in the development of such environments is the design, specification and integration of user interface tools that communicate between the application tools of the system and the user. Typically these interactions are object oriented and involve interaction with tool parameters, which in many applications (CAD/CAM, imaging systems, image processing) are represented by graphical data. This paper describes a user-interface tool development system in which both textual and graphical display and interaction techniques are integrated under a single model. This allows the user to interact with tool parameters in either graphical or textual modes, and to have the parameters displayed in the manner most relevant to the problem set.
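A toy Python sketch of the "single model, two presentations" idea (all class and method names are invented, not the paper's system): one parameter object can be displayed and edited either textually or through a graphical-style view, and an edit in one view is immediately visible in the other.

```python
# Toy illustration only: a single parameter model with a textual view and an
# ASCII "graphical" view standing in for a real widget.
class ToolParameter:
    def __init__(self, name, value, lo, hi):
        self.name, self.value, self.lo, self.hi = name, value, lo, hi

    def as_text(self):
        return f"{self.name} = {self.value}"

    def as_slider(self, width=20):
        pos = round((self.value - self.lo) / (self.hi - self.lo) * (width - 1))
        return "[" + "-" * pos + "|" + "-" * (width - 1 - pos) + f"] {self.name}"

    def set_from_text(self, text):
        self.value = float(text.split("=")[1])

    def set_from_slider(self, pos, width=20):
        self.value = self.lo + pos / (width - 1) * (self.hi - self.lo)

p = ToolParameter("radius", 2.0, 0.0, 10.0)
print(p.as_text())
print(p.as_slider())
p.set_from_slider(10)          # a "graphical" edit...
print(p.as_text())             # ...immediately visible in the textual view
```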
12.
DSM as a knowledge capture tool in CODE environment
A design structure matrix (DSM) provides a simple, compact, and visual representation of a complex system/process. This paper shows how the DSM, a systems engineering tool, is applied as a knowledge capture (acquisition) tool in a generic NPD process. The acquired knowledge (identified in the DSM) is provided in the form of questionnaires, which are organized into five performance indicators of the organization, namely ‘Marketing’, ‘Technical’, ‘Financial’, ‘Resource Management’, and ‘Project Management’. An industrial application is carried out for knowledge validation. It is found from the application that the acquired knowledge helps NPD teams, managers and stakeholders to benchmark their NPD endeavor and select areas to focus their improvement efforts (up to 80% valid).
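A minimal illustration of a DSM and of how marked dependencies can be turned into capture questions (the tasks and the derived questions below are invented examples, not the paper's questionnaires):

```python
# Minimal DSM sketch: tasks as rows/columns, a mark where the row task depends
# on information from the column task; each mark yields a capture question.
tasks = ["Concept", "Design", "Prototype", "Market test"]
# deps[i][j] = 1 means tasks[i] needs an input from tasks[j]
deps = [
    [0, 0, 0, 1],   # concept work is revisited after market-test feedback
    [1, 0, 0, 0],
    [0, 1, 0, 0],
    [0, 0, 1, 0],
]

print("DSM".ljust(12) + " ".join(t[:4].ljust(4) for t in tasks))
for name, row in zip(tasks, deps):
    print(name.ljust(12) + " ".join(("X" if c else ".").ljust(4) for c in row))

# Each marked cell becomes a knowledge-capture question for the NPD team.
for i, row in enumerate(deps):
    for j, c in enumerate(row):
        if c:
            print(f"Q: What information does '{tasks[i]}' need from '{tasks[j]}'?")
```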
13.
Parallel structure skeleton theory provides a general model for describing parallel programming design patterns. By abstracting design patterns at a higher level, it effectively addresses the limitations of design-pattern-based parallel programming methods and lowers the difficulty of developing parallel programs. PASBPE, a parallel programming environment based on parallel structure skeletons, builds on this theory: it uses parameterized configuration to rapidly generate the parallel program framework a user needs, and its visual, interactive programming environment simplifies the parallel program development process and improves development efficiency.
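In the spirit of a parameterized skeleton, the sketch below shows a generic "farm" structure specialized by two parameters (the worker function and the degree of parallelism); it is a Python illustration, not code generated by PASBPE or its actual API.

```python
# Illustrative skeleton sketch: a generic farm skeleton, parameterized and then
# instantiated into a concrete parallel program frame.
from multiprocessing import Pool

def farm_skeleton(worker, n_workers):
    """Return a parallel 'farm' program frame specialized by the two parameters."""
    def run(inputs):
        with Pool(processes=n_workers) as pool:
            return pool.map(worker, inputs)
    return run

def user_task(x):          # the only part the user has to supply
    return x ** 2

if __name__ == "__main__":
    program = farm_skeleton(user_task, n_workers=4)   # parameterized generation
    print(program(range(10)))
```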
14.
By viewing different parallel programming paradigms as essentially heterogeneous approaches to mapping ‘real-world’ problems onto parallel systems, the authors discuss methodologies for integrating multiple programming models on a massively parallel system such as the Connection Machine CM-5. Using a dataflow-based integration model built in the AVS visualization software, the authors describe a simple, effective and modular way to couple sequential, data-parallel and explicit message-passing modules into an integrated parallel programming environment on a CM-5. A case study in the area of numerical advection modeling is given to demonstrate the integration of data-parallel and message-passing modules in the proposed multi-paradigm programming environment.
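A toy dataflow coupling, loosely inspired by the integration model described above (the module names and the tiny scheduler are invented): each module consumes the output of its upstream module, so codes written in different paradigms can be chained behind one interface.

```python
# Toy dataflow graph: each node names a function and its upstream node, so
# sequential, "data-parallel" and "message-passing" stand-ins compose uniformly.
def data_parallel_advect(field):
    return [x + 0.1 for x in field]           # stand-in for a data-parallel kernel

def message_passing_smooth(field):
    return [(a + b) / 2 for a, b in zip(field, field[1:] + field[-1:])]

def visualize(field):
    return "|" + "".join("#" if x > 0.5 else "." for x in field) + "|"

graph = {"advect": (data_parallel_advect, None),
         "smooth": (message_passing_smooth, "advect"),
         "view":   (visualize, "smooth")}

def run(graph, sink, source_data):
    fn, upstream = graph[sink]
    return fn(source_data if upstream is None else run(graph, upstream, source_data))

print(run(graph, "view", [0.2, 0.6, 0.9, 0.4]))
```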
15.
The ParaScope Editor is a new kind of interactive parallel programming tool for developing scientific Fortran programs. It assists the knowledgeable user by displaying the results of sophisticated program analyses and by providing editing and a set of powerful interactive transformations. After an edit or parallelism-enhancing transformation, the ParaScope Editor incrementally updates both the analyses and source quickly. This paper describes the underlying implementation of the ParaScope Editor, paying particular attention to the analysis and representation of dependence information and its reconstruction after changes to the program.
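A very small sketch of the incremental-update idea (the ParaScope Editor's dependence machinery is far richer than this): cached dependence edges are invalidated and recomputed only where they touch the edited statement.

```python
# Toy incremental reanalysis: flow dependences are cached, and an edit redoes
# only the entries that mention the edited statement. Statement model invented.
def compute_deps(stmts):
    """Flow dependences: a later statement reads a variable an earlier one writes."""
    deps = set()
    for i, (written, _) in enumerate(stmts):
        for j in range(i + 1, len(stmts)):
            if written in stmts[j][1]:
                deps.add((i, j))
    return deps

# each statement: (variable written, set of variables read)
stmts = [("a", set()), ("b", {"a"}), ("c", {"b"})]
cache = compute_deps(stmts)
print("before edit:", sorted(cache))

# Edit statement 1 so it no longer reads 'a'; only deps touching stmt 1 are redone.
stmts[1] = ("b", {"x"})
cache = {d for d in cache if 1 not in d} | {d for d in compute_deps(stmts) if 1 in d}
print("after edit: ", sorted(cache))
```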
16.
Stephen Sum, Dorothee Koch, Choong Fook Nyen, Dragan Domazet, Lim Seng San 《Computers in Industry》1996,30(3):225-232
The described framework system has the goal of providing an integration platform through which engineering tools can interact. Engineering tools exchange information via the data repository of the framework system. In the European research project ESPRIT EP6896 Concurrent/Simultaneous Engineering System (CONSENS), a Product Information Archive (PIA) is being developed based on the object-oriented database system of the framework, the Object Management System. The product model is based on a STEP-compliant schema. This has been achieved by developing an Application Resource Model (ARM) for the required product information according to a user requirements analysis. The ARM was then mapped to the Integrated Resource Models of STEP, which resulted in an object-oriented STEP-compliant model. The main objective of PIA is the integration of the product information flows between parallel teams using the framework for product development. This is provided by an interface consisting of a library of functions that enable tools within and external to the framework to access PIA and exchange up-to-date product information. Additionally, an X Motif based interface provides human users with direct access. The framework has been tested by the integration of various tools which support product development.
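The sketch below is a deliberately simple stand-in for the kind of function library the abstract describes, with an invented API that is not the actual PIA interface: tools check product information in and out of a shared archive so that parallel teams always read the current version.

```python
# Invented, simplified stand-in for a product-information repository interface
# that engineering tools call to exchange up-to-date product data.
class ProductInformationArchive:
    def __init__(self):
        self._store = {}          # object id -> (version, data)

    def check_in(self, obj_id, data):
        version = self._store.get(obj_id, (0, None))[0] + 1
        self._store[obj_id] = (version, data)
        return version

    def check_out(self, obj_id):
        version, data = self._store[obj_id]
        return {"id": obj_id, "version": version, "data": data}

pia = ProductInformationArchive()
pia.check_in("gearbox-housing", {"material": "Al", "mass_kg": 3.2})   # CAD tool writes
pia.check_in("gearbox-housing", {"material": "Al", "mass_kg": 3.0})   # analysis tool updates
print(pia.check_out("gearbox-housing"))                               # another team reads v2
```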
17.
18.
Bagrodia R., Meyer R., Takai M., Yu-An Chen, Xiang Zeng, Martin J., Ha Yoon Song 《Computer》1998,31(10):77-85
Design and development costs for extremely large systems could be significantly reduced if only there were efficient techniques for evaluating design alternatives and predicting their impact on overall system performance metrics. Due to the systems' analytical intractability, simulation is the most common performance evaluation technique for such systems. However, the long execution times needed for sequential simulation models often hamper evaluation. The slow speeds of sequential model execution have led to growing interest in the use of parallel execution for simulating large-scale systems. Widespread use of parallel simulation, however, has been significantly hindered by a lack of tools for integrating parallel model execution into the overall framework of system simulation. Another drawback to widespread use of simulations is the cost of model design and maintenance. The simulation environment the authors developed at UCLA attempts to address some of these issues. It consists of three primary components: a parallel simulation language called Parsec (parallel simulation environment for complex systems), its GUI, called Pave, and the portable runtime system that implements the simulation algorithms.
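To fix ideas about what such a simulation language schedules, here is a tiny sequential discrete-event kernel in Python; it is not Parsec code, and a parallel kernel would add conservative or optimistic synchronization on top of event scheduling like this.

```python
# Tiny sequential discrete-event kernel: a priority queue of timestamped events
# drives a toy single-server queue with fixed interarrival and service times.
import heapq

def simulate(end_time):
    queue = [(0.0, "arrival", 0)]     # events: (time, kind, sequence number)
    served = 0
    while queue:
        clock, kind, n = heapq.heappop(queue)
        if clock > end_time:
            break
        if kind == "arrival":
            heapq.heappush(queue, (clock + 1.0, "arrival", n + 1))   # next customer
            heapq.heappush(queue, (clock + 0.7, "departure", n))     # fixed service time
        else:
            served += 1
    return served

print("customers served by t=100:", simulate(100.0))
```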
19.
20.
Louise Moody, Alan Waterworth, Avril D. McCarthy, Peter J. Harley, Rod H. Smallwood 《Virtual Reality》2008,12(2):77-86
The Sheffield knee arthroscopy training system (SKATS) was originally a visual-based virtual environment without haptic feedback, but has been further developed into a mixed-reality training environment through the use of tactile augmentation (or passive haptics). The design of the new system is outlined and then tested. In the first experiment described, the effect of tactile augmentation on performance is considered by comparing novice performance using the original and the mixed reality system. In the second experiment the mixed reality system is assessed in terms of construct validity by comparing the performance of users with differing levels of surgical expertise. The results are discussed in terms of the validity of a mixed reality environment for training knee arthroscopy.