20 similar documents retrieved.
1.
A distributed processing system based on UNIX (a trademark of Bell Laboratories) is currently operational at New Mexico State University. The system, which is composed of a variety of PDP-11 and LSI-11 processing elements, allows users to schedule tasks to run entirely or partly in parallel on its satellite units. A UNIX-like kernel runs on the satellite processors, which allows virtually any process to run on any processing element. The architecture of the system is a star configuration with a PDP-11/34a as the central or host node. To support experiments in parallel processing, a parallel version of the C programming language has been developed that allows users to write programs as collections of functional units that can be automatically scheduled to run on the satellite processors. In this paper the structure of the system is described in terms of hardware and software, and our implementation of pc, a parallel C language, is discussed.
2.
By viewing different parallel programming paradigms as essentially heterogeneous approaches to mapping ‘real-world’ problems onto parallel systems, the authors discuss methodologies for integrating multiple programming models on a massively parallel system such as the Connection Machine CM5. Using a dataflow-based integration model built in the visualization software AVS, the authors describe a simple, effective, and modular way to couple sequential, data-parallel, and explicit message-passing modules into an integrated parallel programming environment on a CM5. A case study in the area of numerical advection modeling demonstrates the integration of data-parallel and message-passing modules in the proposed multi-paradigm programming environment.
3.
Parallel structure skeleton theory provides a general model for describing parallel programming design patterns. By abstracting design patterns at a higher level, it can effectively address the limitations of pattern-based parallel programming methods and reduce the difficulty of parallel program development. PASBPE, a parallel programming environment based on parallel structure skeletons, builds on this theory to quickly generate the parallel program frameworks users need through parameterized configuration, while its visual, interactive programming environment simplifies the parallel program development process and improves development efficiency.
4.
We describe a system that allows programmers to take advantage of both control and data parallelism through multiple intercommunicating data-parallel modules. This programming environment extends C-type stream I/O to include intermodule communication channels. The programmer writes each module as a separate data-parallel program, then develops a channel linker specification describing how to connect the modules together. A channel linker we have developed loads the separate modules onto the parallel machine and binds the communication channels together as specified. We present performance data demonstrating that a mixed control- and data-parallel solution can yield better performance than a strictly data-parallel solution. The system described currently runs on the Intel iWarp multicomputer.
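The channel model sketched in this abstract can be illustrated with a small, purely hypothetical example: two ordinary processes stand in for data-parallel modules, and a POSIX pipe stands in for an intermodule channel of the kind the channel linker would bind. The process layout and data format are invented for illustration and are not the described system's actual API.

```c
/* Hypothetical illustration only: two processes stand in for
 * data-parallel modules, and a POSIX pipe stands in for the
 * intermodule channel that the described channel linker would bind.
 * This is not the actual API of the system in the abstract. */
#include <stdio.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void) {
    int chan[2];                      /* chan[0] = read end, chan[1] = write end */
    if (pipe(chan) != 0) { perror("pipe"); return 1; }

    pid_t pid = fork();
    if (pid < 0) { perror("fork"); return 1; }

    if (pid == 0) {                   /* "producer" module */
        close(chan[0]);
        double partial[4] = {1.0, 2.0, 3.0, 4.0};   /* pretend data-parallel result */
        write(chan[1], partial, sizeof partial);
        close(chan[1]);
        _exit(0);
    }

    /* "consumer" module */
    close(chan[1]);
    double recv[4];
    if (read(chan[0], recv, sizeof recv) == (ssize_t)sizeof recv) {
        double sum = 0.0;
        for (int i = 0; i < 4; i++) sum += recv[i];
        printf("consumer module received sum = %g\n", sum);
    }
    close(chan[0]);
    wait(NULL);
    return 0;
}
```

In the real system, each endpoint would itself be a data-parallel program running on the parallel machine, and the binding would come from the channel linker specification rather than from fork().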
6.
The IC* project is an effort to create an environment for the design, specification, and development of complex systems such as communication protocols, parallel machines, and distributed systems. The basis of the project is the IC* model of parallel computation, in which a system is specified by a set of invariant expressions that describe its behavior in time. The features of this model include temporal and structural constraints, inherent parallelism, explicit modeling of time, nondeterministic evolution, and dynamic activation. The project also includes the construction of a parallel computer specifically designed to support the model of computation. The authors discuss the IC* model and the current user language, and describe the architecture and hardware of the prototype supercomputer built to execute IC* programs.
7.
This paper describes and illustrates a structured programming metalanguage (DPOS) and graphical programming environment for generating and debugging high-level distributed MIMD parallel programs. DPOS introduces an innovative message-passing model as well as recursive graphical definition of parallel process networks. It also provides programming and debugging at the metalanguage level that is portable across implementation languages. The initial development focus of DPOS is to provide a parallel development system for Lisp-based, symbolic, and artificial intelligence programs as part of the MAYFLY parallel processing project. The DPOS environment also generates source code and provides a simulation system for graphical debugging and animation of the programs in graph form.
8.
The Virtual Programming Laboratory (VPL) is a Web-based virtual programming environment built on a client–server architecture. The system can be accessed on any platform (Unix, PC, or Mac) using a standard Java-enabled browser. Software delivery over the Web imposes a novel set of constraints on design. We outline the tradeoffs in this design space, motivate the choices necessary to deliver an application, and detail the lessons learned in the process. We discuss the role of Java and other Web technologies in the realization of the design. VPL facilitates the development and execution of parallel programs. The initial prototype supports high-level parallel programming based on Fortran 90 and High Performance Fortran (HPF), as well as explicit low-level programming with the MPI message-passing interface. Supplementary Java-based, platform-independent tools for data and performance visualization are an integral part of the VPL. Pablo SDDF trace files generated by the Pablo performance instrumentation system are used for post-mortem performance visualization. © 1997 John Wiley & Sons, Ltd.
9.
With the growing availability of multiprocessors, a great deal of attention has been given to executing Prolog in parallel. A question that naturally arises is how to execute standard sequential Prolog programs with side effects in parallel. The problem of performing side effects in AND-parallel systems has been considered elsewhere. This paper presents a method that provides sequential semantics for side-effect predicates in an OR-parallel system. First, a general method is given for performing data side effects such as read and write. This method is then extended to control side effects such as asserta, assertz, and retract. Finally, a constant-time algorithm for performing cut is presented.

The work of L. V. Kale was supported by the National Science Foundation under Grant NSF-CCR-8700988. The work of D. A. Padua and D. C. Sehr was supported in part by the National Science Foundation under Grant NSF-MIP-8410110, the Department of Energy under Grant DOE DE-FG02-85ER25001, and a donation from the IBM Corporation to the Center for Supercomputing Research and Development. D. C. Sehr holds a fellowship from the Office of Naval Research.
10.
The ParaScope Editor is a new kind of interactive parallel programming tool for developing scientific Fortran programs. It assists the knowledgeable user by displaying the results of sophisticated program analyses and by providing editing and a set of powerful interactive transformations. After an edit or parallelism-enhancing transformation, the ParaScope Editor quickly and incrementally updates both the analyses and the source. This paper describes the underlying implementation of the ParaScope Editor, paying particular attention to the analysis and representation of dependence information and its reconstruction after changes to the program.
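The kind of information such a tool surfaces can be illustrated with a small, hedged example (written in C purely for compactness; the ParaScope Editor itself targets Fortran): the first loop has no loop-carried dependence and may run in parallel, while the second carries a flow dependence across iterations.

```c
/* Illustrative only (C used for compactness; ParaScope targets Fortran):
 * the kind of loop-carried dependence information a dependence analyzer
 * computes and displays. */

/* No loop-carried dependence: each iteration reads and writes distinct
 * elements (a and b declared non-aliasing), so the loop may run in parallel. */
void scale(double *restrict b, const double *restrict a, int n) {
    for (int i = 0; i < n; i++)
        b[i] = 2.0 * a[i];
}

/* Loop-carried flow dependence: iteration i reads a[i-1], which was written
 * by iteration i-1, so the iterations cannot simply be run in parallel
 * without a transformation. */
void prefix(double *a, int n) {
    for (int i = 1; i < n; i++)
        a[i] = a[i] + a[i - 1];
}
```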
11.
The mpC language, designed specifically for programming high-performance computations on heterogeneous networks, is described. An mpC program explicitly defines an abstract computing network and distributes data, computations, and communications over it. At runtime, the mpC programming environment uses this information, together with information about the actual network, to distribute the processes over the actual network so as to execute the program in the most efficient way. Experience in using mpC for solving problems on local networks consisting of heterogeneous workstations is discussed.
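mpC's own notation for abstract networks is not reproduced here. As a rough, hedged illustration of the underlying idea only, the MPI sketch below divides work across processes in proportion to assumed relative speeds; in mpC this mapping is derived automatically by the programming environment from the abstract network definition and runtime information, and the speed figures here are placeholders.

```c
/* Illustration of the idea behind mpC's mapping, not mpC itself:
 * split N iterations across processes in proportion to assumed
 * relative speeds.  The speed figures are made-up placeholders;
 * mpC would obtain such information from its runtime instead. */
#include <mpi.h>
#include <stdio.h>

#define N 1000000L

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);
    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    /* Placeholder relative speeds (e.g. measured beforehand). */
    double speed = (rank % 2 == 0) ? 2.0 : 1.0;
    double total = 0.0;
    MPI_Allreduce(&speed, &total, 1, MPI_DOUBLE, MPI_SUM, MPI_COMM_WORLD);

    /* Prefix sum of speeds gives each rank its share of [0, N). */
    double before = 0.0;
    MPI_Exscan(&speed, &before, 1, MPI_DOUBLE, MPI_SUM, MPI_COMM_WORLD);
    if (rank == 0) before = 0.0;            /* Exscan leaves rank 0 undefined */

    long lo = (long)(N * before / total);
    long hi = (long)(N * (before + speed) / total);

    double local = 0.0;
    for (long i = lo; i < hi; i++)
        local += 1.0 / (i + 1.0);            /* stand-in for real work */

    double global = 0.0;
    MPI_Reduce(&local, &global, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);
    if (rank == 0)
        printf("sum = %f (ranks sized by relative speed)\n", global);

    MPI_Finalize();
    return 0;
}
```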
12.
Fault tolerance is an issue ignored in most parallel languages. The overhead of making parallel, high-performance programs resilient to processor crashes is often too high, given the low probability of such events. As parallel systems grow larger, however, processor failures become more likely, so they should be dealt with. Two approaches to this problem are feasible. First, the system can make programs fault-tolerant transparently: it can log messages, make checkpoints, and so on. Second, the programmer can write explicit code for handling failures in an application-specific way. The latter approach is potentially more efficient, but it also requires more work from the programmer. In this paper, we aim to gain some initial insight into how difficult and how efficient explicit fault-tolerant parallel programming is. We do so by implementing four parallel applications in Argus, a language supporting both parallelism and fault tolerance. Our experience indicates that the extra effort needed for fault tolerance varies considerably between applications. Also, trade-offs can frequently be made between programming effort and efficiency. One lesson we learned is that fault tolerance should not be added as an afterthought, but is best taken into account from the start. Another is that the ability to integrate transparent and explicit mechanisms for fault tolerance would sometimes be highly useful.
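Argus constructs (guardians, atomic actions) are not shown here. As a hedged sketch of what "explicit code for handling failures" can look like in an application-specific way, the example below periodically checkpoints its own state to a file so a restarted process can resume; the file name and state layout are invented for the illustration.

```c
/* Hedged sketch of explicit, application-level fault tolerance:
 * periodically checkpoint state so a restarted process can resume.
 * This is NOT Argus; it only illustrates the "explicit" approach
 * the abstract contrasts with transparent logging/checkpointing. */
#include <stdio.h>

struct state { long next_item; double partial_sum; };

/* Write the checkpoint atomically: write to a temp file, then rename. */
static int save_checkpoint(const char *path, const struct state *s) {
    char tmp[256];
    snprintf(tmp, sizeof tmp, "%s.tmp", path);
    FILE *f = fopen(tmp, "wb");
    if (!f) return -1;
    if (fwrite(s, sizeof *s, 1, f) != 1) { fclose(f); return -1; }
    fclose(f);
    return rename(tmp, path);
}

static int load_checkpoint(const char *path, struct state *s) {
    FILE *f = fopen(path, "rb");
    if (!f) return -1;                       /* no checkpoint: fresh start */
    size_t ok = fread(s, sizeof *s, 1, f);
    fclose(f);
    return ok == 1 ? 0 : -1;
}

int main(void) {
    const char *ckpt = "worker.ckpt";        /* invented file name */
    struct state s = {0, 0.0};
    if (load_checkpoint(ckpt, &s) == 0)
        printf("recovered at item %ld\n", s.next_item);

    for (; s.next_item < 1000000; s.next_item++) {
        if (s.next_item % 100000 == 0)
            save_checkpoint(ckpt, &s);       /* state is consistent here */
        s.partial_sum += 1.0 / (s.next_item + 1.0);   /* stand-in for work */
    }
    printf("done: %f\n", s.partial_sum);
    return 0;
}
```

Even in this toy example, the abstract's lesson is visible: the state that must survive a crash has to be gathered into one restorable structure from the start rather than added afterwards.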
13.
In response to problems experienced by the Orbix Generation 3 maintenance and enhancement team, Iona Technologies tried to introduce industry-level best practices by adopting extreme programming. The issues discussed are common for companies moving from startup mode to supporting numerous customers who need bug fixes and application enhancements for existing deployment scenarios.
14.
Marvel is a knowledge-based programming environment that assists software development teams in performing and coordinating their activities. During the design of Marvel, several granularity issues were discovered that have a strong impact on the degree of intelligence the environment can exhibit, as well as on its friendliness and performance. The most significant granularity issues include the refinement of software entities in the software database and the decomposition of the software tools that process the entities and report their results to the human users. This paper describes the many alternative granularities and explains the choices made for Marvel.
15.
In the semiconductor manufacturing industry, production resembles an automated assembly line in which many similar products with slightly different specifications are manufactured step by step, with each step being a complicated physicochemical batch process performed by a number of tools. This constitutes a high-mix production system for which effective run-to-run (RtR) control and fault detection and classification (FDC) can be carried out only if the states of different tools and different products can be estimated. However, since in each production run a specific product is processed on a specific tool, the absolute individual states of products and tools are not observable. In this work, a novel state estimation method based on analysis of variance (ANOVA) is developed to estimate the states of each product and tool relative to the grand average performance of the station in the fab. The method is formulated as a recursive state estimation using the Kalman filter. The advantages of this method are demonstrated using simulations, which show that the correct relative states can be estimated in production scenarios such as tool shift, tool drift, product ramp-up, tool/product offline, and preventive maintenance (PM). Furthermore, application of this state estimation method in an RtR control scheme shows that substantial improvements in process capability can be gained, especially for products with small lot counts. The proposed algorithm is also evaluated in an industrial application.
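The paper's ANOVA-based formulation is not reproduced here. As a hedged sketch of the general idea only, the example below assumes each run's measurement is a tool offset plus a product offset plus noise, models the offsets as random walks, and applies a standard Kalman filter; the dimensions, run history, and numerical values are all placeholders.

```c
/* Hedged sketch, not the paper's formulation: estimate per-tool and
 * per-product offsets from run measurements y = tool + product + noise
 * with a standard Kalman filter.  Offsets follow independent random
 * walks.  Two tools and two products; all numbers are placeholders. */
#include <stdio.h>

#define NT 2                  /* tools    -> states 0..NT-1     */
#define NP 2                  /* products -> states NT..NT+NP-1 */
#define N  (NT + NP)

static double x[N];           /* state estimate (offsets)       */
static double P[N][N];        /* estimate covariance            */
static const double q = 1e-4; /* random-walk (process) variance */
static const double r = 1e-2; /* measurement noise variance     */

/* One run of product p on tool t producing measurement y. */
static void kalman_run(int t, int p, double y) {
    int a = t, b = NT + p;                    /* measurement h = e_a + e_b */

    for (int i = 0; i < N; i++) P[i][i] += q; /* predict: P += Q           */

    double Ph[N];                             /* P * h                     */
    for (int i = 0; i < N; i++) Ph[i] = P[i][a] + P[i][b];
    double S = Ph[a] + Ph[b] + r;             /* innovation variance       */

    double K[N];                              /* Kalman gain               */
    for (int i = 0; i < N; i++) K[i] = Ph[i] / S;

    double innov = y - (x[a] + x[b]);
    for (int i = 0; i < N; i++) x[i] += K[i] * innov;
    for (int i = 0; i < N; i++)
        for (int j = 0; j < N; j++)
            P[i][j] -= K[i] * Ph[j];          /* P = (I - K h^T) P         */
}

int main(void) {
    for (int i = 0; i < N; i++) P[i][i] = 1.0;   /* vague prior            */

    /* Made-up run history: (tool, product, measurement). */
    struct { int t, p; double y; } runs[] = {
        {0, 0, 0.31}, {1, 0, 0.12}, {0, 1, 0.42}, {1, 1, 0.21}, {0, 0, 0.29},
    };
    for (unsigned k = 0; k < sizeof runs / sizeof runs[0]; k++)
        kalman_run(runs[k].t, runs[k].p, runs[k].y);

    printf("tool offsets:    %.3f %.3f\n", x[0], x[1]);
    printf("product offsets: %.3f %.3f\n", x[2], x[3]);
    return 0;
}
```

Note that a constant can shift between the tool and product offsets without changing any measurement, so only relative states are identifiable, which mirrors the observability issue the abstract describes.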
16.
In this paper, we report on the role of the Urdu grammar in the Parallel Grammar (ParGram) project (Butt, M., King, T. H., Niño, M.-E., & Segond, F. (1999). A grammar writer’s cookbook. CSLI Publications; Butt, M., Dyvik, H., King, T. H., Masuichi, H., & Rohrer, C. (2002). ‘The parallel grammar project’. In: Proceedings of COLING 2002, Workshop on grammar engineering and evaluation, pp. 1–7). The Urdu grammar was able to take advantage of standards in analyses set by the original grammars in order to speed development. However, novel constructions, such as correlatives and extensive complex predicates, resulted in expansions of the analysis feature space as well as extensions to the underlying parsing platform. These improvements are now available to all the project grammars.
17.
A software development system based upon integrated skeleton technology (ASSIST) is a proposal for a new programming environment oriented to the development of parallel and distributed high-performance applications according to a unified approach. The main goals are: high-level programmability and software productivity for complex multidisciplinary applications, including data-intensive and interactive software; performance portability across different platforms, in particular large-scale platforms and grids; effective reuse of parallel software; and efficient evolution of applications through versions that scale according to the underlying technologies. The purpose of this paper is to show the principles of the proposed approach in terms of the programming model (successive papers will deal with the environment implementation and with performance evaluation). The features and characteristics of the ASSIST programming model are described in an operational-semantics style, using examples to drive the presentation, to show the expressive power, and to discuss the research issues. Building on our previous experience in structured parallel programming, in ASSIST we wish to overcome some limitations of the classical skeleton approach in order to improve generality, flexibility, expressive power, and efficiency for irregular, dynamic, and interactive applications, as well as for complex combinations of task and data parallelism. A new paradigm, called “parallel module” (parmod), is defined which, in addition to expressing the semantics of several skeletons as particular cases, is able to express more general parallel and distributed program structures, including both data-flow and nondeterministic reactive computations. ASSIST allows the programmer to design applications in the form of generic graphs of parallel components. Another distinguishing feature is that ASSIST modules are able to use external objects, including shared data structures and abstract objects (e.g. CORBA), through standard interfacing mechanisms. In turn, an ASSIST application can be reused and exported as a component for other applications, possibly expressed in different formalisms.
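ASSIST's coordination language is not shown here. As a loose, hedged illustration of the "nondeterministic reactive" behavior attributed to a parmod, the C sketch below reacts to whichever of two input streams delivers data first, with pipes standing in for the module's input streams; none of this reflects ASSIST's actual syntax or runtime.

```c
/* Rough illustration of a reactive, nondeterministic module in the spirit
 * of the parmod description: react to whichever input stream has data,
 * in no fixed order.  Plain C with select(), not ASSIST. */
#include <stdio.h>
#include <sys/select.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void) {
    int a[2], b[2];
    if (pipe(a) || pipe(b)) { perror("pipe"); return 1; }

    /* Two child processes stand in for upstream modules. */
    if (fork() == 0) { close(a[0]); write(a[1], "task-from-A", 11); close(a[1]); _exit(0); }
    if (fork() == 0) { close(b[0]); write(b[1], "task-from-B", 11); close(b[1]); _exit(0); }
    close(a[1]); close(b[1]);

    int open_a = 1, open_b = 1;
    while (open_a || open_b) {          /* react to whichever stream is ready */
        fd_set rd;
        FD_ZERO(&rd);
        if (open_a) FD_SET(a[0], &rd);
        if (open_b) FD_SET(b[0], &rd);
        int maxfd = (a[0] > b[0] ? a[0] : b[0]) + 1;
        if (select(maxfd, &rd, NULL, NULL, NULL) <= 0) break;

        char buf[64];
        if (open_a && FD_ISSET(a[0], &rd)) {
            ssize_t n = read(a[0], buf, sizeof buf - 1);
            if (n <= 0) { open_a = 0; close(a[0]); }
            else { buf[n] = '\0'; printf("reacted to A: %s\n", buf); }
        }
        if (open_b && FD_ISSET(b[0], &rd)) {
            ssize_t n = read(b[0], buf, sizeof buf - 1);
            if (n <= 0) { open_b = 0; close(b[0]); }
            else { buf[n] = '\0'; printf("reacted to B: %s\n", buf); }
        }
    }
    while (wait(NULL) > 0) ;
    return 0;
}
```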
19.
JXTA technology, from Sun Microsystems, is a network programming and computing platform designed to solve a number of problems in modern distributed computing, especially in the area broadly referred to as peer-to-peer (P2P) computing or P2P networking. JXTA provides a network programming platform specifically designed to be the foundation for P2P systems. As a set of protocols, the technology stays away from APIs and remains independent of programming languages. This means that heterogeneous devices with completely different software stacks can interoperate through JXTA protocols. JXTA technology is also independent of transport protocols; it can be implemented on top of TCP/IP, HTTP, Bluetooth, HomePNA, and many other protocols.
20.
Visual programming is an appealing technique that many environments support. It can be applied in a system development process that non-software engineers can carry out. The key is to use visual, domain-specific models. Because there are many different domains, it is economical to develop a generic, configurable visual programming environment (VPE) that can be customized for particular domains and paradigms. The author discusses a generic VPE's requirements, design, and implementation, and illustrates its use in a system for the process-control domain, the Intelligent Process-Control System (IPCS). This VPE and the IPCS have been developed in a multiyear research effort. Different versions of the VPE are used at many companies, including Boeing, DuPont, and NASA, and the IPCS has been commercialized by the Osaka Gas Information Systems Research Institute (Osaka, Japan).