Similar Articles (20 results)
1.
The French Atomic Energy Authority (Commissariat à l'Énergie Atomique, or CEA) started to investigate electronic computing in 1952 and in 1956 created a specialized team, trained in England. It bought its first digital computers in 1957. From then on it continually acquired the most powerful computing equipment available (IBM Stretch, IBM 360/91, CDC 7600) to meet the needs of both civilian and military nuclear research. For many years, the CEA was IBM's largest customer outside the United States. In 1972 the Computer Division of the "Civilian CEA" was spun off as a subsidiary, CISI, which became a leading European software company.

2.
Our increased ability to model and measure a wide variety of phenomena has left us awash in data. In the immediate future, the authors anticipate collecting data at the rate of terabytes per day from many classes of applications, including simulations running on teraFLOPS-class computers and experimental data produced by increasingly more sensitive and accurate instruments, such as telescopes, microscopes, particle accelerators and satellites. Generating or acquiring data is not an end in itself but a vehicle for obtaining insights. While data analysis and reduction have a role to play, in many situations we achieve understanding only when a human being interprets the data. Visualization has emerged as an important tool for extracting meaning from the large volumes of data that scientific instruments and simulations produce. The authors describe an online system that supports 3D tomographic image reconstruction (and subsequent collaborative analysis) of data from remote scientific instruments.
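The system described reconstructs 3D volumes from a remote instrument; as a minimal stand-in for the reconstruction step only, the following sketch runs 2D filtered backprojection on a synthetic phantom with scikit-image. The phantom, angle set, and error metric are illustrative choices, not taken from the article.

```python
# Minimal 2D slice reconstruction sketch (filtered backprojection).
# Stand-in illustration only; the article's system handles 3D data
# streamed from a remote scientific instrument.
import numpy as np
from skimage.data import shepp_logan_phantom
from skimage.transform import radon, iradon

phantom = shepp_logan_phantom()                    # synthetic test object
angles = np.linspace(0.0, 180.0, 180, endpoint=False)

sinogram = radon(phantom, theta=angles)            # simulate projection data
reconstruction = iradon(sinogram, theta=angles)    # filtered backprojection

rms_error = np.sqrt(np.mean((reconstruction - phantom) ** 2))
print(f"RMS reconstruction error: {rms_error:.4f}")
```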

3.
Gathering a group of remotely located engineers to design a vehicle can be difficult, especially if they live in different countries. To overcome this obstacle, we, a team at the National Center for Supercomputing Applications (NCSA) in the US working in partnership with Germany's National Research Center for Information Technology (GMD), developed a collaborative virtual prototyping system for Caterpillar. The Virtual Prototyping System (VPS) will let engineers in Belgium and the US work together on vehicle designs using distributed virtual reality. The system supports collaborative design review and interactive redesign. Integrated real-time video transmissions let engineers see the other participants in the shared virtual environment, displayed at each remote site's viewpoint position and orientation. Any number of remotely located sites may join in the shared VE, communicating via multicast. The system has been tested with three sites at NCSA.

4.
The development of powerful computers and faster input/output devices, coupled with the need for storing and analyzing data, has resulted in massive databases (of the order of terabytes). Such volumes of data clearly overwhelm more traditional data analysis methods. A new generation of tools and techniques is needed for finding interesting patterns in the data and discovering useful knowledge. In this paper we present the design of more effective and efficient genetic-algorithm-based data mining techniques that use the concepts of self-adaptive feature selection together with a wrapper feature selection method based on the Hausdorff distance measure.
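As an illustrative sketch only (not the authors' self-adaptive algorithm), the snippet below runs a plain genetic algorithm over binary feature masks and scores each mask by a Hausdorff-distance separation between two synthetic classes in the selected subspace. The data, the size penalty, and all GA parameters are assumptions made for the example.

```python
# Toy GA-based wrapper feature selection using the Hausdorff distance
# as a class-separation criterion. Illustrative sketch, not the paper's method.
import numpy as np
from scipy.spatial.distance import directed_hausdorff

rng = np.random.default_rng(0)

# Synthetic two-class data: 6 features, only the first two are informative.
n, d = 200, 6
X0 = rng.normal(0.0, 1.0, (n, d))
X1 = rng.normal(0.0, 1.0, (n, d))
X1[:, :2] += 3.0

def fitness(mask):
    """Score a boolean feature mask by between-class Hausdorff separation."""
    if not mask.any():
        return 0.0
    a, b = X0[:, mask], X1[:, mask]
    sep = max(directed_hausdorff(a, b)[0], directed_hausdorff(b, a)[0])
    return sep - 0.5 * mask.sum()        # crude size penalty (illustrative choice)

pop = rng.integers(0, 2, (30, d)).astype(bool)       # random initial masks
for generation in range(40):
    scores = np.array([fitness(m) for m in pop])
    parents = pop[np.argsort(scores)[::-1][:10]]      # truncation selection
    children = []
    while len(children) < len(pop):
        p1, p2 = parents[rng.integers(0, 10, 2)]
        cut = rng.integers(1, d)
        child = np.concatenate([p1[:cut], p2[cut:]])  # one-point crossover
        flip = rng.random(d) < 0.05                   # bit-flip mutation
        children.append(np.where(flip, ~child, child))
    pop = np.array(children)

best = pop[np.argmax([fitness(m) for m in pop])]
print("selected features:", np.flatnonzero(best))
```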

5.
Nanyang Technological University virtual campus [virtual reality project]
The idea for building a VR model of the NTU campus came about six years ago, when the School of Computer Engineering purchased a powerful graphics workstation with advanced modeling software systems, MultiGen-Paradigm's MultiGen and Vega. Three-dimensional Web visualization developed rapidly, and personal computers became capable of running VR walkthroughs, even in shared virtual worlds. Cybertown, created by Tony Rockliff and Pascal Baudar with the Virtual Reality Modeling Language (VRML) on the Blaxxun Platform, inspired us to put our virtual campus on the Web.

6.
We have hydrodynamically explored the dependence on spatial dimension of the viability of the neutrino heating mechanism of core-collapse supernova explosions and find that the tendency to explode is a monotonically increasing function of dimension. Moreover, we find that the delay to explosion for a given neutrino luminosity is always shorter in 3D than 2D, sometimes by many hundreds of milliseconds. The magnitude of this dimensional effect is much larger than the purported magnitude of a variety of other effects sometimes invoked to bridge the gap between the current ambiguous and uncertain theoretical situation and the fact of robust supernova explosions in Nature. Our finding, facilitated by access to state-of-the-art codes and large computers, may be an important step towards unraveling one of the most problematic puzzles in stellar astrophysics.

7.
The role of multistage turbomachinery simulation in the development of propulsion system models is discussed. In particular, the need for simulations with higher fidelity and faster turnaround time is highlighted. It is shown how such fast simulations can be used in engineering-oriented environments. The use of parallel processing to achieve the required turnaround times is discussed. Current work by several researchers in this area is summarized, as well as efforts at the NASA Lewis Research Center. The latter efforts are focused on implementing the average-passage turbomachinery model on MIMD, distributed memory parallel computers. Performance results are given for inviscid, single blade row and viscous, multistage applications on several parallel computers, including networked workstations.

8.
Sharply different from the well-known use of various computer programs for the numerical aspects of mathematics and logic is the newer and less familiar use of a program to address the logical reasoning aspects. In this article, we focus on such a program, the automated reasoning program OTTER, which is portable, available electronically by anonymous FTP, and usable on a wide variety of computers, even on personal computers. We discuss the types of assistance provided by OTTER, including proof finding, conjecture formulation, and object construction. With OTTER's assistance, we have answered a number of open questions taken from a variety of fields. We focus on such questions from combinatory logic, equivalential calculus, Robbins algebra, and finite semigroup theory. For those who enjoy a challenge, we also offer ten questions, including some that are still open, and an open question that eluded even Tarski. This work was supported by the Applied Mathematical Sciences subprogram of the Office of Energy Research, US Department of Energy, under Contract W-31-109-Eng-38.

9.
A memorandum dated April 27, 1942, from the National Advisory Committee for Aeronautics, is reproduced. The memorandum describes a computing facility at the Langley Memorial Aeronautical Laboratory, in which a team of humans equipped with mechanical calculators was organized to assist in aeronautics research. The memorandum reveals much about the state of computing as it existed just before the invention of automatic digital computers, whose introduction would bring this era to a close.

10.
Trial by fire: teleoperated robot targets Chernobyl
The blast that destroyed Unit 4 of the Chernobyl Nuclear Power Plant (CNPP) 12 years ago prompted a firestorm of scientific, technological, political, and economic proposals for managing the worst nuclear accident to date. Following a meeting of the G-7 nations (the United States, Canada, Britain, France, Italy, Germany, and Japan) and Ukrainian representatives, the US Department of Energy (DOE) and National Aeronautics and Space Administration (NASA) organized and funded a “dream team” of experts in robotics as well as computer hardware and software for the “Pioneer Project”. Pioneer is a specialized, tethered, bulldozer-like robot equipped with stereo vision for real-time 3D mapping, a core-drilling and sampling apparatus, and an array of radiation and other sensor tools for remotely investigating Unit 4. The team has scheduled Pioneer's deployment at Chernobyl for November 1998.

11.
Sandia National Laboratories already lists the fastest computer in the world (ASCI Red) and the fastest home-assembled computer in the world (C-Plant) among its credentials. Now the people at Sandia, the US Department of Energy's national security laboratory, are turning their attention toward another arena: developing an intelligent software agent capable of defending against network hackers and computer viruses.

12.
The exponential growth of computers and computer applications since the 1960s has not been matched by personnel capable of conducting involved computer crime investigations. There is a need for intensive training programmes in this area, following an interdisciplinary team approach modelled on the MBA degree. An international clearing house for the creation and exchange of case studies for training computer crime investigators is a primary need.

13.
This article discusses the computational structure of the most effective methods for factoring integers and the computer architectures—existing and used, proposed, and under construction—which efficiently perform the computations of these various methods. New developments in technology and in pricing of computers are making it possible to build powerful parallel machines, at relatively low cost, which can substantially outperform standard computers on specific types of computations. The intent of this article is to use factoring and computers for factoring to provoke general thought about this matching of computer architectures to algorithms and computations. The author's research at Louisiana State University was supported in part by the National Science Foundation and the National Security Agency under grants NSF DCR 83-115-80 and NSA MDA904-85-H-0006.
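The article surveys factoring methods and machines rather than giving code; purely as a compact example of the kind of computation involved, here is Pollard's rho method in Python (one classical factoring algorithm, not necessarily one the article emphasizes, and with an illustrative test number).

```python
# Pollard's rho factoring sketch: illustrative only.
from math import gcd

def pollard_rho(n):
    """Return a nontrivial factor of a composite n (None if all retries fail)."""
    if n % 2 == 0:
        return 2
    for c in range(1, 20):           # retry with a different constant if a cycle yields n
        x = y = 2
        d = 1
        while d == 1:
            x = (x * x + c) % n      # tortoise: one step of x -> x^2 + c (mod n)
            y = (y * y + c) % n
            y = (y * y + c) % n      # hare: two steps
            d = gcd(abs(x - y), n)
        if d != n:
            return d
    return None

n = 10403                            # 101 * 103, a small test composite
p = pollard_rho(n)
print(p, n // p)
```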

14.
Computers & Fluids, 1996, 25(5): 485-496
Solving the Navier-Stokes equations with detailed modeling of the transport and reaction terms remains at the present time a very difficult challenge. Direct simulations of two-dimensional reactive flows using accurate models for the chemical reactions generally require days of computing time on today's most powerful serial vector supercomputers. Up to now, realistic three-dimensional simulations remain practically impossible. Working with parallel computers seems at present to be the only possible way to investigate more complicated problems at acceptable cost; however, the lack of standards for parallel architectures constitutes a real obstacle. In this paper, we describe the structure of a parallel two-dimensional direct simulation code using detailed transport, thermodynamic and reaction models. By separating the modules controlling the parallel work from the flow solver, it is possible to achieve a high degree of compatibility between parallel computers using distributed memory and message-passing communication. A dynamic load-balancing procedure is implemented in order to optimize the distribution of the load among the different nodes. Efficiencies obtained with this code on many different architectures are given. First application examples concerning the interaction between vortices and a diffusion flame are shown in order to illustrate the possibilities of the solver.
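The paper's solver balances a domain-decomposed flow computation; as a loose illustration of dynamic load balancing only, the sketch below shows a generic master-worker pattern with mpi4py, where the master hands a new work item to whichever node reports back first. Nothing here comes from the paper's code; the task list, tags, and launch line are assumptions.

```python
# Generic master-worker dynamic load balancing with mpi4py (illustrative).
# Run with at least 2 ranks, e.g. `mpiexec -n 4 python this_file.py`.
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()
size = comm.Get_size()
TAG_WORK, TAG_STOP = 1, 2

if rank == 0:
    tasks = list(range(20))                      # stand-in work items (>= number of workers)
    status = MPI.Status()
    results, active = [], size - 1
    for worker in range(1, size):                # prime every worker with one task
        comm.send(tasks.pop(), dest=worker, tag=TAG_WORK)
    while active > 0:
        result = comm.recv(source=MPI.ANY_SOURCE, tag=MPI.ANY_TAG, status=status)
        results.append(result)
        src = status.Get_source()
        if tasks:
            comm.send(tasks.pop(), dest=src, tag=TAG_WORK)   # keep fast nodes busy
        else:
            comm.send(None, dest=src, tag=TAG_STOP)
            active -= 1
    print("collected", len(results), "results")
else:
    status = MPI.Status()
    while True:
        task = comm.recv(source=0, tag=MPI.ANY_TAG, status=status)
        if status.Get_tag() == TAG_STOP:
            break
        comm.send(task * task, dest=0)           # pretend "computation"
```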

15.
Particle accelerators play an increasingly important role in basic and applied science. Several countries are involved in efforts aimed at developing accelerator-related technologies to support different application domains, including high-energy and nuclear physics, material science, biological science, and military use. The technological challenges associated with designing the next generation of accelerators will require numerical modeling capabilities far beyond those normally used within the accelerator community. In 1997 the US Department of Energy initiated a Grand Challenge in Computational Accelerator Physics, primarily to develop a new generation of high-performance accelerator modeling tools and apply them to projects of national importance. These tools will have a major impact on reducing the cost and technical risk of future projects, as well as maximizing the performance of present and future accelerators. In addition, they will enable the simulation of problems three to four orders of magnitude larger than ever done before. The use of algorithms and software optimized for high-performance computing will make it possible to obtain results quickly and with very high accuracy. This work is being done in collaboration between Los Alamos National Laboratory (LANL), Stanford Linear Accelerator Center, the National Energy Research Scientific Computing Center, Stanford University, and the University of California at Los Angeles. This article focuses on the accelerator simulation model and the current techniques used to visualize the project results.

16.
Both bioinformatics and chemoinformatics rely on computers and networks as their research platform: data are retrieved over the network, and computational analysis tasks are carried out on remote machines. At the same time, bioinformatics and chemoinformatics software is spread across different operating systems such as Unix, Linux, and Windows. A general-purpose, cross-platform network research platform would therefore be a great asset to researchers in these fields. Building on the existing network hardware of the Shandong Bioinformation Engineering Technology Research Center, this work establishes an open, convenient, and practical network research platform through network architecture construction and software configuration. With this platform, users can retrieve research data and submit computational and research tasks from any computer within the designated network domain, and can also log in graphically from any Windows or Linux PC workstation to the center's UNIX/Linux/Windows servers to work remotely. This enables campus-wide sharing of data, computing, and graphics resources, saves network and computing resources, and facilitates research work.

17.
As part of the recent focus on increasing the productivity of parallel application developers, Co-array Fortran (CAF) has emerged as an appealing alternative to the Message Passing Interface (MPI). CAF belongs to the family of global address space parallel programming languages; such languages provide the abstraction of globally addressable memory accessed using one-sided communication. At Rice University we are developing cafc, an open source, multiplatform CAF compiler. Our earlier studies show that cafc-compiled CAF programs achieve similar performance to that of corresponding MPI codes for the NAS Parallel Benchmarks. In this paper, we present a study of several CAF implementations of Sweep3D on four modern architectures. We analyze the impact of using one-sided communication in Sweep3D, identify potential sources of inefficiencies and suggest ways to address them. Our results show that we achieve comparable performance to that of the MPI version on three cluster-based architectures and outperform it by up to 10% on the SGI Altix 3000. This work was supported in part by the Department of Energy under Grant DE-FC03-01ER25504/A000, the Los Alamos Computer Science Institute (LACSI) through LANL contract number 03891-99-23 as part of the prime contract (W-7405-ENG-36) between the DOE and the Regents of the University of California, Texas Advanced Technology Program under Grant 003604-0059-2001, and Compaq Computer Corporation under a cooperative research agreement. This research was performed in part using the Molecular Science Computing Facility (MSCF) in the William R. Wiley Environmental Molecular Sciences Laboratory, a national scientific user facility sponsored by the U.S. Department of Energy’s Office of Biological and Environmental Research and located at the Pacific Northwest National Laboratory. Pacific Northwest is operated for the Department of Energy by Battelle. The computations were performed in part on an Itanium cluster purchased with support from the NSF under Grant EIA-0216467, Intel, and Hewlett Packard and on the National Science Foundation Terascale Computing System at the Pittsburgh Supercomputing Center. Cristian Coarfa and Yuri Dotsenko contributed equally to this work.
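CAF itself is outside the scope of a short snippet here, but the one-sided, globally addressable style it provides can be illustrated with MPI-2 remote memory access through mpi4py. The sketch below (synthetic buffers, two ranks assumed, not the Sweep3D kernel) performs a one-sided Put into another rank's exposed memory between two fences.

```python
# One-sided communication sketch with mpi4py RMA windows (illustrative).
# Run with `mpiexec -n 2 python this_file.py`.
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()

# Each rank exposes a small buffer, loosely analogous to declaring a co-array.
local = np.zeros(4, dtype='d')
win = MPI.Win.Create(local, comm=comm)

win.Fence()                                   # open an access epoch (collective)
if rank == 0:
    data = np.arange(4, dtype='d')
    win.Put([data, MPI.DOUBLE], 1)            # one-sided write into rank 1's buffer
win.Fence()                                   # close the epoch; the Put is now visible

if rank == 1:
    print("rank 1 received:", local)

win.Free()
```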

18.
Energy consumption of parallel computers has become an obstacle to building higher-performance systems. In this paper, we focus on power optimization of high-performance interconnection networks for MPI applications in high-performance parallel computers. In contrast to previous history-based work, we propose compiler-directed, power-aware on/off switching of network links. Network links experience idle intervals during the execution of parallel applications, yet still consume large amounts of energy during those intervals. Using on/off network links, the compiler first divides load-balanced MPI applications into communication intervals and computation intervals, and then inserts on/off instructions into the applications to switch the link state. To avoid the time overhead of state switching, we use a time estimation technique to analyze the computation time and insert the "on" instruction before the communication interval is reached. Results from simulations and experiments show that the proposed compiler-directed method can reduce the energy consumption of interconnection networks by 20-70%, with less than 1% loss in network latency and performance.
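As a rough sketch of the mechanism described, assuming hypothetical runtime primitives link_off()/link_on() standing in for the compiler-inserted instructions and a made-up SWITCH_LATENCY, the toy below shows how the wake-up call can be issued before the communication interval so the switching delay overlaps the tail of the computation.

```python
# Toy model of compiler-directed on/off links (all primitives are stand-ins).
import time

SWITCH_LATENCY = 0.010        # assumed link wake-up time in seconds
link_is_on = True

def link_off():
    """Stand-in for the compiler-inserted 'off' instruction."""
    global link_is_on
    link_is_on = False

def link_on():
    """Stand-in for the 'on' instruction; waking the link takes SWITCH_LATENCY."""
    global link_is_on
    time.sleep(SWITCH_LATENCY)
    link_is_on = True

def computation_interval(estimated_seconds):
    """A compute phase whose duration the compiler has estimated."""
    link_off()                                    # links idle during computation
    busy_until = time.time() + estimated_seconds
    # Issue the wake-up SWITCH_LATENCY early so it hides behind the remaining work.
    while time.time() < busy_until - SWITCH_LATENCY:
        pass                                      # ... numerical work here ...
    link_on()                                     # overlaps the tail of the compute phase
    while time.time() < busy_until:
        pass

def communication_interval():
    assert link_is_on, "links must be on before MPI traffic starts"
    # ... MPI sends/receives would go here ...

for step in range(3):
    computation_interval(estimated_seconds=0.05)
    communication_interval()
```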

19.
Determining the exact location of buried waste trenches is an important step in the characterization and remediation of certain hazardous waste sites. Remotely sensed data offers a rich source of information for accomplishing this task. This paper presents an investigation of buried waste trenches located at Oak Ridge National Laboratory (ORNL) using thermal remote sensing. A comparison of historical aerial photography and recently collected thermal imagery reveals a thermal signature which coincides with the precise locations of buried waste trenches. Statistical analysis of extensive ground measurements shows a clear thermal difference between the trench and control areas, with the trenches exhibiting cooler temperatures and greater soil moisture. By incorporating the imagery-derived information into site remediation plans, ORNL realized a cost avoidance of more than US $5,000,000. Similar benefits can be anticipated at other DOE waste sites.
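The ground-measurement analysis amounts to comparing temperature samples from trench and control areas; a minimal sketch of such a comparison, using made-up numbers rather than the ORNL data, is a two-sample Welch t-test.

```python
# Two-sample comparison of trench vs. control surface temperatures (synthetic data).
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
trench  = rng.normal(21.3, 0.6, 120)   # cooler, moister trench soil (made-up values)
control = rng.normal(22.4, 0.7, 120)   # undisturbed control area (made-up values)

t_stat, p_value = stats.ttest_ind(trench, control, equal_var=False)
print(f"mean difference = {trench.mean() - control.mean():.2f} degC, p = {p_value:.2g}")
```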

20.
Kirby W. Fong, Software, 1985, 15(1): 87-103
The National Magnetic Fusion Energy Computer Center (NMFECC) at the Lawrence Livermore National Laboratory (LLNL) has implemented a simple, yet powerful interactive operating system, the Cray Time-Sharing System (CTSS), on a Cray-1 supercomputer. CTSS augments the multi-programming batch facilities normally found in supercomputer systems with many of the interactive services typical of interactive minicomputer systems. This paper gives some of the historical background leading to CTSS and gives an overview of the system that emphasizes the strong points or unusual features such as multiple channels, decentralized control of resources, priorities and program scheduling, system recovery, and on-line documentation.
