This article presents a GRID framework for distributed computations in the chemical process industries. We advocate a generic agent-based GRID environment in which chemical processes can be represented, simulated, and optimized as a set of autonomous, collaborative software agents. The framework offers advantages in scalability, software reuse, security, and distributed resource discovery and utilization, and it illustrates how advanced distributed techniques and paradigms can be applied in chemical engineering to support distributed computation and discovery functions in chemical process engineering. A prototype implementation of the proposed framework for chemical process design is presented to illustrate these concepts.
Filtering algorithms are well accepted as a means of speeding up the solution of the consistent labeling problem (CLP). Although path consistency filters more thoroughly than arc consistency (AC), AC remains the preferred technique because it has a much lower time complexity. We are implementing parallel path consistency algorithms on multiprocessors and comparing their performance to the best sequential and parallel arc consistency algorithms(1,2) (see also work by Keretho et al.(3) and Kasif(4)). Preliminary work has shown linear speedups for parallelized path consistency, and has also shown that in many cases performance is significantly better than the theoretical worst case. These two results lead us to believe that parallel path consistency may be a superior filtering technique. Finally, we have implemented path consistency as an outer-product computation and have obtained good results (e.g., linear speedup on a 64K-node Connection Machine 2).
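The "outer product" view of path consistency mentioned above rests on the fact that a binary constraint can be stored as a boolean matrix, and the composition of two constraints is then a boolean matrix product. The following is a minimal sequential sketch of that idea, not the authors' parallel implementation; the domain sizes, variable names, and fixed-point loop are illustrative assumptions.

```python
import numpy as np

def path_consistency(R):
    """Filter binary constraints to a path-consistent fixed point.

    R[i][j] is a boolean matrix over the domains of variables i and j:
    R[i][j][a, b] is True iff (i=a, j=b) is allowed. The revision step
    R_ij <- R_ij AND (R_ik . R_kj) uses a boolean matrix product, which
    is why the filter maps naturally onto matrix/outer-product hardware.
    """
    n = len(R)
    changed = True
    while changed:
        changed = False
        for i in range(n):
            for j in range(n):
                for k in range(n):
                    # Pair (a, b) survives only if some c exists with
                    # (a, c) allowed by R_ik and (c, b) allowed by R_kj.
                    comp = (R[i][k].astype(int) @ R[k][j].astype(int)) > 0
                    new = R[i][j] & comp
                    if not np.array_equal(new, R[i][j]):
                        R[i][j] = new
                        changed = True
    return R
```

On an unsatisfiable instance such as 2-coloring a triangle, the fixed point empties the constraints, detecting inconsistency without search.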
Corner detection is a low-level feature detection operation that is of great use in image processing applications, for example, optical flow and structure from motion by image correspondence. The detection of corners is a computationally intensive operation, and past implementations of corner detection techniques have been restricted to software. In this paper we propose an efficient very large-scale integration (VLSI) architecture for the detection of corners in images. The corner detection technique is based on the half-edge concept and the first directional derivative of the Gaussian. Apart from the locations of the corner points, the algorithm also computes the corner orientation and the corner angle, and outputs the edge map of the image. The symmetry properties of the masks are exploited to reduce the number of convolutions from eight to two, so the number of multiplications required per pixel drops from 1800 to 392. The proposed architecture thus yields a speed-up factor of 4.6 over conventional convolution architectures. The architecture uses the principles of pipelining and parallelism and can be implemented in VLSI.
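The eight-to-two reduction above exploits a general property of first derivatives of a Gaussian: a directional derivative at any orientation is a linear combination of just two base responses, G_x and G_y, so many oriented convolutions collapse to two. The sketch below illustrates only that property; the mask size, sigma, and the naive convolution routine are illustrative assumptions, not the paper's half-edge algorithm or hardware design.

```python
import numpy as np

def gaussian_deriv_kernels(sigma=1.0, radius=3):
    """Build the two base masks G_x and G_y: first derivatives of a
    2-D Gaussian along x and y."""
    ax = np.arange(-radius, radius + 1)
    xx, yy = np.meshgrid(ax, ax)
    g = np.exp(-(xx**2 + yy**2) / (2 * sigma**2))
    return -xx / sigma**2 * g, -yy / sigma**2 * g

def convolve2d(img, k):
    """Naive zero-padded 'same' convolution, kept dependency-free."""
    r = k.shape[0] // 2
    padded = np.pad(img, r)
    out = np.zeros(img.shape)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            # True convolution: flip the kernel before the dot product.
            out[i, j] = np.sum(padded[i:i + k.shape[0], j:j + k.shape[1]] * k[::-1, ::-1])
    return out

def directional_derivatives(img, angles, sigma=1.0):
    """The response at orientation theta is
    cos(theta) * (img * G_x) + sin(theta) * (img * G_y),
    so N oriented responses cost only two convolutions."""
    gx, gy = gaussian_deriv_kernels(sigma)
    rx, ry = convolve2d(img, gx), convolve2d(img, gy)
    return [np.cos(t) * rx + np.sin(t) * ry for t in angles]
```

With eight orientations, the per-pixel multiply count is dominated by the two base convolutions plus eight cheap 2-term combinations, which is the shape of the 1800-to-392 saving reported above.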
Nowadays, any Knowledge Based System (KBS) realization requires intercommunication among distributed components and the use of disconnected, distributed data sources, which poses several challenges to the classical Artificial Intelligence field of KBS.
The multiagent paradigm and the use of ontologies are considered suitable tools for facing the problems of designing and developing today's KBSs. Moreover, accessing such networked KBSs through handheld devices enables more efficient exploitation of, and interaction with, the system.
This paper presents an open and flexible architecture for a distributed KBS and an application of it to construct a consulting system for psychological disorders, the so-called PDA2 (Psychological Disorder Assistant through PDA). We analyze the main features of the architecture as well as the agent tools that may be used to construct it. Additionally, we present a support ontology for psychological disorders.
The Earth Simulator (ES), developed under the Japanese government’s initiative “Earth Simulator project”, is a highly parallel vector supercomputer system. In this paper, an overview of the ES, its architectural features, its hardware technology, and the results of performance evaluation are described.
In May 2002, the ES was acknowledged to be the most powerful computer in the world: 35.86 teraflop/s for the LINPACK HPC benchmark and 26.58 teraflop/s for an atmospheric general circulation code (AFES). Such remarkable performance may be attributed to three architectural features: vector processors, shared memory, and a high-bandwidth, non-blocking crossbar interconnection network.
The ES consists of 640 processor nodes (PN) and an interconnection network (IN), which are housed in 320 PN cabinets and 65 IN cabinets. The ES is installed in a specially designed building, 65 m long, 50 m wide, and 17 m high. To realize this advanced system, many hardware technologies have been developed, such as high-density, high-frequency LSI; high-frequency signal transmission; high-density packaging; and a high-efficiency, low-noise cooling and power supply system, so as to reduce the overall volume of the ES and its total power consumption.
For highly parallel processing, a special synchronization mechanism connecting all nodes, the Global Barrier Counter (GBC), has been introduced.
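The GBC is special-purpose hardware that lets all 640 nodes rendezvous between parallel phases without software polling. As a rough software analogue only (the worker count, phase count, and thread model below are illustrative assumptions, not the ES design), a global barrier enforces that no node begins phase p+1 until every node has finished phase p:

```python
import threading

def run_phases(n_workers=4, n_phases=3):
    """Software analogue of a global barrier: every worker must arrive
    before any worker starts the next phase, which is the role the
    hardware Global Barrier Counter plays across the ES nodes."""
    barrier = threading.Barrier(n_workers)
    log = []
    lock = threading.Lock()

    def worker(wid):
        for phase in range(n_phases):
            with lock:
                log.append((phase, wid))  # record this worker's phase work
            barrier.wait()  # all workers rendezvous here before phase + 1

    threads = [threading.Thread(target=worker, args=(w,)) for w in range(n_workers)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return log
```

In the log, every phase-p entry precedes every phase-(p+1) entry, exactly the ordering guarantee a global barrier provides; the hardware GBC delivers the same guarantee with far lower latency than message-based software barriers.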