Similar Documents
Found 20 similar documents (search time: 31 ms)
1.
Supercomputers are prevalent and vital to scientific research and industrial fields, and may be used to represent the level of national scientific development. A summary of the evolution of supercomputers will help direct the future development of supercomputers and supercomputing applications. In this paper, we summarize the accomplishments in supercomputing, predict the trend of future supercomputers, and present several breakthroughs in supercomputer architecture research.

3.
The last decade has seen a substantial increase in commodity computer and network performance, mainly as a result of faster hardware and more sophisticated software. Nevertheless, there are still problems, in the fields of science, engineering, and business, which cannot be effectively dealt with using the current generation of supercomputers. In fact, due to their size and complexity, these problems are often highly numerically and/or data intensive and consequently require a variety of heterogeneous resources that are not available on a single machine. A number of teams have conducted experimental studies on the cooperative use of geographically distributed resources unified to act as a single powerful computer. This new approach is known by several names, such as metacomputing, scalable computing, global computing, Internet computing, and more recently peer-to-peer or Grid computing. The early efforts in Grid computing started as a project to link supercomputing sites, but have now grown far beyond their original intent. In fact, many applications can benefit from the Grid infrastructure, including collaborative engineering, data exploration, high-throughput computing, and of course distributed supercomputing. Moreover, due to the rapid growth of the Internet and Web, there has been a rising interest in Web-based distributed computing, and many projects have been started that aim to exploit the Web as an infrastructure for running coarse-grained distributed and parallel applications. In this context, the Web has the capability to be a platform for parallel and collaborative work as well as a key technology for creating a pervasive and ubiquitous Grid-based infrastructure. This paper presents the state-of-the-art of Grid computing and surveys the major international efforts in developing this emerging technology. Copyright © 2002 John Wiley & Sons, Ltd.

4.
Computer (2004), 37(2): 10–13
Although grid computing - which links disparate machines so that they can function as a distributed supercomputer - has become an increasingly popular focus of high-performance-computing research, the traditional supercomputing industry languished until recently. This occurred largely because the easing of the Cold War in the early 1990s reduced government use of and spending on supercomputer technology. However, the industry is now reviving because of the development of low-cost supercomputer clusters that use commodity chips. Clusters of commodity computers linked by high-speed interconnect technologies, such as InfiniBand, have put supercomputers within reach of new users. Meanwhile, longtime supercomputer vendors like Cray are making a comeback with systems based on vector units, powerful CPUs dedicated to floating-point and matrix calculations. Vector machines satisfy the demand by governments and large industries for supercomputers that can conduct complex tasks such as nuclear-weapons simulations, pharmaceutical drug modeling, mining of large data sets, and geological analysis to find oil deposits.

5.
High Performance Computing (HPC) was born in the mid-1970s with the emergence of vector supercomputers. It has since evolved with technology and business, progressively enlarging its scope of application. In this paper, we describe the fundamental concepts at the core of HPC, their evolution, the way they are used today in real applications, how these applications are evolving, and how application and technology are transforming business.

6.
This paper briefly reviews the current state of practical supercomputer applications abroad, describes the strategic supercomputer development programs that foreign countries have formulated in response to application demand, and discusses several development trends in the supercomputing field.

7.
IEEE Micro (1993), 13(1): 67–70
Japan's Ministry of International Trade and Industry's (MITI's) Superspeed project, which investigated high-speed devices and computer architecture, algorithms, and languages for parallel computing, is reviewed. The supercomputing industries in Japan and the United States are compared. The architecture and performance of current supercomputers and the current state of supercomputer technology and supercomputer software are discussed.

8.
Most supercomputing systems are built on the Linux operating system, which prevents the use of application software that runs only on Windows. In addition, the steep learning curve of supercomputer operation deters users unfamiliar with Linux, causing supercomputing centers to lose users. Working from a Linux supercomputing environment, this study explores ways to run Windows applications while preserving the convenience of system operation and maintenance. Using X11 forwarding, Wine, and virtualization technologies, it provides users with a Windows application runtime environment compatible with the supercomputer's job scheduling system, together with secure and stable access to users' personal files. The configuration methods and worked examples presented can serve as a solution for supercomputing centers with similar needs, broadening the range of software available to users and improving user satisfaction.

9.
There has been much talk over the past two decades about commercialization of the mobile ad hoc network (MANET) technology. Potential ad hoc network applications with some commercial appeal are now finally emerging, “drafted” by the enormously successful wireless LAN technology. Closely coupled to commercial applications and critically dependent on commercial ad hoc networks will be the “pervasive computing” applications. Since military and civilian emergency MANETs have been around for over three decades, and since the Government has continuously supported MANET research for as many years, it may seem natural to assume that all the research has already been done and that commercial MANETs can be deployed by simply leveraging the military and civilian research results. Unfortunately, there is a catch. Commercial MANETs (and therefore pervasive computing applications) will evolve in a way totally different from their military counterparts. Most importantly, they will start small, and will initially be tethered to the Internet. They will be extremely cost-aware. They will also need to cater to a variety of different applications. This is in sharp contrast with the large scale, autonomous, special purpose and cost insensitive military networks. In this paper we review a typical “battlefield” MANET application and contrast it to two emerging commercial MANET scenarios—the urban vehicle grid and the Campus network. We compare characteristics and design goals and make the case for new research to help kick off commercial MANETs. In particular we argue that P2P technology will be critical in the early evolution of commercial MANETs and identify research directions for P2P MANETs.

10.
High-performance computing in finance: The last 10 years and the next
Almost two decades ago supercomputers and massively parallel computers promised to revolutionize the landscape of large-scale computing and provide breakthrough solutions in several application domains. Massively parallel processors achieve today teraFLOPS performance – trillion floating point operations per second – and they deliver on their promise. However, the anticipated breakthroughs in application domains have been more subtle and gradual. They came about as a result of combined efforts with novel modeling techniques, algorithmic developments based on innovative mathematical theories, and the use of high-performance computers that vary from top-range workstations, to distributed networks of heterogeneous processors, and to massively parallel computers. An application that benefited substantially from high-performance computing is that of finance and financial planning. The advent of supercomputing coincided with the so-called “age of the quants” in Wall Street, i.e., the mathematization of problems in finance and the strong reliance of financial managers on quantitative analysts. These scientists, aided by mathematical models and computer simulations, aim at a better understanding of the peculiarities of the financial markets and the development of models that deal proactively with the uncertainties prevalent in these markets. In this paper we give a modest synthesis of the developments of high-performance computing in finance. We focus on three major developments: (1) the use of Monte Carlo simulation methods for security pricing and Value-at-Risk (VaR) calculations; (2) the development of integrated financial product management tools and practices – also known as integrative risk management or enterprise-wide risk management; and (3) financial innovation and the computer-aided design of financial products.
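The first development this abstract names, Monte Carlo simulation for Value-at-Risk, can be illustrated with a minimal single-asset sketch. The geometric Brownian motion model, the parameter values, and the function name below are assumptions chosen for illustration, not details taken from the paper.

```python
import math
import random

def simulate_var(s0, mu, sigma, horizon_days, n_paths, confidence=0.99, seed=42):
    """Estimate single-asset Value-at-Risk by Monte Carlo.

    Simulates terminal prices under geometric Brownian motion and returns
    the loss threshold exceeded with probability (1 - confidence).
    """
    rng = random.Random(seed)
    dt = horizon_days / 252.0  # horizon as a fraction of a trading year
    losses = []
    for _ in range(n_paths):
        z = rng.gauss(0.0, 1.0)
        st = s0 * math.exp((mu - 0.5 * sigma**2) * dt + sigma * math.sqrt(dt) * z)
        losses.append(s0 - st)  # positive value = loss over the horizon
    losses.sort()
    idx = min(int(confidence * n_paths), n_paths - 1)
    return losses[idx]  # empirical quantile of the loss distribution

# Illustrative run: 10-day 99% VaR for a $100 position, 20% annual volatility.
var_99 = simulate_var(s0=100.0, mu=0.05, sigma=0.2, horizon_days=10, n_paths=100_000)
print(round(var_99, 2))
```

Production VaR engines price whole portfolios per path, which is what made the method a natural fit for the parallel machines the paper surveys: paths are independent and trivially distributed across processors.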

11.
Woodward, P.R. Computer (1996), 29(10): 99–111
I am fortunate to have had access to supercomputers for the last 28 years. Over this time I have used them to simulate time-dependent fluid flows in the compressible regime. Strong shocks and unstable multifluid boundaries, along with the phenomenon of fluid turbulence, have provided the simulation complexity that demands supercomputer power. The supercomputers I have used - the CDC 6600, 7600, and Star-100, the Cray-1, Cray-XMP, Cray-2, and Cray C-90, the Connection Machines CM-2 and CM-5, the Cray T3D, and the Silicon Graphics Challenge Array and Power Challenge Array - span three revolutions in supercomputer design: the introduction of vector supercomputing, parallel supercomputing on multiple CPUs, and supercomputing on hierarchically organized clusters of microprocessors with cache memories. The last revolution is still in progress, so its outcome is somewhat uncertain. I view these design revolutions through the prism of my specialty and through applications of the supercomputers I have used. Also, because these supercomputer design changes have driven equally important changes in numerical algorithms and the programs that implement them, I describe the three revolutions from this perspective.

12.
There are now over thirty supercomputers in use in Europe. In this short communication, we consider their distribution both by geographical location and by the principal activity of each site. The latter leads naturally to an estimate of the amount of supercomputer usage in the major areas of scientific research.

13.
This study explores a teaching method for improving business students’ skills in e-commerce page evaluation and making Web design majors aware of business content issues through cooperative learning. Two groups of female students at a Japanese university studying either tourism or Web page design were assigned tasks that required cooperation to investigate whether a minimum of formal training and interaction between the two groups would result in an increase in the “design” students’ awareness of content issues in page design, and an improvement in the “tourism” students’ ability to evaluate Web pages related to tourism. The results showed only slight improvements, suggesting that either the amount of cooperative learning must be increased or some formal instruction must be introduced.

14.
This paper reviews some relationships between training and formal education, and discusses ways in which education for “Computer Literacy” might provide foundations for subsequent training and retraining in the face of an increasingly automated and information-rich environment. Computer Literacy is defined in terms of learning experiences at all levels of education which contribute to general technological awareness, to familiarity with routine applications of computers and understanding of their potential for human emancipation, and to practical problem-solving skills based on creative use of computers and information technology. The effects of different levels of investment available for motivated training and education are considered. Close co-operation between education and training sectors is advocated.

15.
IT Professional (2008), 10(1): 17–23
Energy-efficient (green) supercomputing has traditionally been viewed as passé, even to the point of public ridicule. But today, it's finally coming into vogue. This article describes the authors' view of this evolution. Ignoring power consumption as a design constraint will result in supercomputing systems with high operational costs and diminished reliability. Fortunately, improving energy efficiency in supercomputers (and computer systems in general) has become an emergent task for the IT industry. Hopefully, this article inspires further innovations and breakthroughs by providing a historical perspective. From Green Destiny to BG/L to innovative power-aware supercomputing prototypes, we envision that holistic power-aware technologies will be available and largely exploited in most, if not all, future supercomputing systems.

16.
If we were to have a Grid infrastructure for visualization, what technologies would be needed to build such an infrastructure, what kind of applications would benefit from it, and what challenges are we facing in order to accomplish this goal? In this survey paper, we make use of the term ‘visual supercomputing’ to encapsulate a subject domain concerning the infrastructural technology for visualization. We consider a broad range of scientific and technological advances in computer graphics and visualization, which are relevant to visual supercomputing. We identify the state-of-the-art technologies that have prepared us for building such an infrastructure. We examine a collection of applications that would benefit enormously from such an infrastructure, and discuss their technical requirements. We propose a set of challenges that may guide our strategic efforts in the coming years.

17.
Dang Gang, Cheng Zhiquan. Computer Science (2013), 40(3): 133–135
At present, most of China's national supercomputing centers follow a construction model of "local-government investment with market-oriented applications." Local governments are more concerned with high-performance-computing applications and services involving local enterprises and institutions, so supercomputing centers are often used for ordinary applications, making it difficult to fully realize the strategic role of supercomputing. How to keep these enormously capable "aircraft carriers" viable, extend their reach, and drive technological innovation has long been a research topic in the field. This paper offers a preliminary discussion of the challenges facing the core applications of domestic supercomputing centers and puts forward several suggestions for building core-application services that meet local needs.

18.
High-performance computing technology is advancing at an accelerating pace. Following the successful development of petaflops systems, supercomputer performance has rapidly risen to tens of petaflops, and international academia and industry widely expect massively parallel exaflops systems (Exascale Computing, or E-class systems) to appear around 2018. Starting from the latest TOP500 list, this paper analyzes technical trends in the supercomputing field and, on that basis, discusses the development trends of future exascale systems and the key technical challenges they face in power consumption, scalability, reliability, and programmability.

19.
Software Fault-Tolerance Techniques Based on COTS Components for Space Exploration
With the development of space activities, the demand for high-performance computing in space exploration missions has become increasingly evident, and high-performance space supercomputers have become one of the keys to the success of next-generation space exploration programs. Dedicated radiation-hardened computing components are not only expensive but also lag far behind contemporary commercial off-the-shelf (COTS) components in performance. Building space supercomputers from COTS components with software fault-tolerance techniques can greatly reduce cost and improve performance and the performance-to-power ratio while achieving the same fault tolerance as dedicated radiation-hardened components. Experiments by NASA and Stanford University have verified that using COTS components helps realize low-cost, high-efficiency next-generation space science exploration programs.
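A classic technique in the family of software fault tolerance this abstract describes is triple modular redundancy (TMR) with majority voting, which masks a radiation-induced upset in any one replica. The sketch below is an illustrative simulation; the function names and the fault-injection interface are invented for this example and are not the paper's implementation.

```python
def tmr_vote(replica_outputs):
    """Majority-vote over three replica results; raise if no two agree."""
    a, b, c = replica_outputs
    if a == b or a == c:
        return a
    if b == c:
        return b
    raise RuntimeError("no majority: all three replicas disagree")

def run_redundant(fn, x, fault=None):
    """Execute fn three times and vote; optionally corrupt one replica.

    `fault=(i, bad_value)` simulates a single-event upset flipping the
    result of replica i before the vote.
    """
    results = [fn(x) for _ in range(3)]
    if fault is not None:
        idx, bad_value = fault
        results[idx] = bad_value
    return tmr_vote(results)

print(run_redundant(lambda v: v * v, 7))                # → 49 (healthy run)
print(run_redundant(lambda v: v * v, 7, fault=(1, 0)))  # → 49 (one upset masked)
```

The same voting idea scales from function results to checkpointed process state; the cost is the roughly threefold compute overhead that cheap COTS parts make affordable.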

20.
Wide-Area High-Performance Parallel and Distributed Computing over High-Speed Networks
This paper argues the following: more and more high-performance applications need to exploit geographically distributed, heterogeneous computing and data resources. These applications seek to connect and integrate, over high-speed networks, geographically distributed and heterogeneous high-performance computers, data servers, large-scale retrieval and storage systems, and visualization and virtual-reality systems into a network virtual computer (called a metacomputer) on which to solve their computational problems. Such metacomputing is, in essence, wide-area high-performance parallel and distributed computing over high-speed networks.
