20 related documents found.
1.
Arun Krishnan 《New Generation Computing》2004,22(2):111-125
The availability of powerful microprocessors and improvements in network performance have enabled high performance computing on wide-area, distributed systems. Computational grids, by integrating diverse, geographically distributed and essentially heterogeneous resources, provide the infrastructure for solving large-scale problems. However, while heterogeneity allows for scalability, it also makes application development and deployment for such an environment extremely difficult.
The field of life sciences has seen an explosion in data over the past decade. The data acquired need to be processed, interpreted and analyzed to be useful. The large resource needs of bioinformatics, allied to the large number of data-parallel applications in this field and the availability of powerful, high performance grid computing environments, lead naturally to opportunities for developing grid-enabled applications. This survey, done as part of the Life Sciences Research Group (a research group belonging to the Global Grid Forum), attempts to collate information regarding grid-enabled applications in this field.
Arun Krishnan, Ph.D.: He completed his undergraduate degree in Electrochemical Engineering at the Central Electrochemical Research Institute in India and went on to earn a Ph.D. in Advanced Process Control from the University of South Carolina. He then worked in the control and high performance computing industries for about 3 years before moving to the Bioinformatics Institute (BII) in Singapore. He is currently a Young Investigator in the Distributed Computing in Biomedicine Group at BII. His research interests include parallel and distributed computing, with special emphasis on grid computing and its application to the biomedical area. He is also interested in developing parallel algorithms for sequence analysis and protein structure prediction.
2.
Daniel Lacks 《Journal of Systems and Software》2009,82(1):89-100
In this work, first, we present a grid resource discovery protocol that discovers computing resources without the need for resource brokers to track existing resource providers. The protocol uses a scoring mechanism to aggregate and rank resource provider assets and Internet router data tables (called grid routing tables) for storage and retrieval of the assets. Then, we discuss the simulation framework used to model the protocol and the results of the experimentation. The simulator utilizes a simulation engine core that can be reused for other network protocol simulators considering time management, event distribution, and a simulated network infrastructure. The techniques for constructing the simulation core code using C++/CLR are also presented in this paper.
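The scoring-and-ranking mechanism described above can be sketched roughly as follows; the asset weights, class names, and provider data are invented for illustration and are not taken from the paper:

```python
# Hypothetical sketch of a broker-less scoring scheme: each provider
# advertises its assets, a score aggregates them, and a routing-table-like
# structure keeps providers ranked for lookup.

WEIGHTS = {"cpus": 1.0, "ram_gb": 0.5, "disk_gb": 0.01}  # assumed weights

def score(assets):
    """Aggregate a provider's advertised assets into a single score."""
    return sum(WEIGHTS.get(k, 0.0) * v for k, v in assets.items())

class GridRoutingTable:
    """Keeps providers sorted by score, best first."""
    def __init__(self):
        self.entries = []  # list of (score, provider_id)

    def insert(self, provider_id, assets):
        self.entries.append((score(assets), provider_id))
        self.entries.sort(reverse=True)

    def best(self, n=1):
        return [pid for _, pid in self.entries[:n]]

table = GridRoutingTable()
table.insert("site-a", {"cpus": 8, "ram_gb": 32})
table.insert("site-b", {"cpus": 64, "ram_gb": 256})
print(table.best())  # ['site-b']
```

A real implementation would distribute such table entries across router-like nodes rather than keep them in one process.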
3.
Distribution of data and computation allows for solving larger problems and executing applications that are distributed in nature. The grid is a distributed computing infrastructure that enables coordinated resource sharing within dynamic organizations consisting of individuals, institutions, and resources. The grid extends the distributed and parallel computing paradigms, allowing for resource negotiation and dynamic allocation, heterogeneity, open protocols, and services. Grid environments can be used both for compute-intensive tasks and data-intensive applications by exploiting their resources, services, and data access mechanisms. Data mining algorithms and knowledge discovery processes are both compute and data intensive; therefore the grid can offer a computing and data management infrastructure for supporting decentralized and parallel data analysis. This paper discusses how grid computing can be used to support distributed data mining. Research activities in grid-based data mining and some challenges in this area are presented, along with some promising future directions for developing grid-based distributed data mining.
4.
Jarosław Wypychowski Jarosław Pytliński Łukasz Skorwider Mirosław Nazaruk Krzysztof Benedyczak Michał Wroński Piotr Bała 《New Generation Computing》2004,22(2):147-156
In this paper we describe the deployment of the most important life sciences applications on the grid. The grid we built is heterogeneous and consists of systems of different architectures, operating systems and middleware. We have used the UNICORE infrastructure as a framework for developing a dedicated user interface to a number of existing computational chemistry codes and molecular biology databases. The developed solution allows access to resources provided with UNICORE as well as Globus through exactly the same interface, which gives access to general grid functionality such as single login, job submission and control mechanisms.
Jarosław Wypychowski: He is a student at the Faculty of Mathematics and Computer Science, Warsaw University, Poland. He is involved in the development of grid tools. He has also worked as a programmer at a private company.
Jarosław Pytliński, M.Sc.: He received his M.Sc. in 2002 from the Faculty of Mathematics and Computer Science of Nicolaus Copernicus University in Toruń. His thesis, “Quantum Chemistry Computations in Grid Environment”, received a distinction in the XIX Polish Contest for the best M.Sc. thesis in Computer Science. He has also worked in the Laboratory of High Performance Systems at UCI, Toruń. His interests are Artificial Intelligence and GRID technology.
Łukasz Skorwider, M.Sc.: He is a programmer at a private pharmaceutical company. He obtained his M.Sc. degree from the Faculty of Mathematics and Computer Science, N. Copernicus University. As a graduate student he was involved in the development of grid tools for drug design. His private and professional interest is Internet technology.
Mirosław Nazaruk, M.Sc.: He is a senior computer and network administrator at ICM, Warsaw University. He provides professional support for users of the high performance facilities located at ICM. He obtained his M.Sc. in Computer Science from Warsaw University in 1991. Before joining ICM, he was a member of the technical staff at the Institute of Applied Mathematics, Warsaw University.
Krzysztof Benedyczak: He is a student at the Faculty of Mathematics and Computer Science, N. Copernicus University, Torun, Poland. He is involved
in the development of grid tools.
Michał Wroński: He is a student at the Faculty of Mathematics and Computer Science, N. Copernicus University, Torun, Poland. He is involved
in the development of grid tools.
Piotr Bała, Ph.D.: He is an adiunkt (assistant professor) at the Faculty of Mathematics and Computer Science, N. Copernicus University, Toruń, Poland, and cooperates closely with ICM, Warsaw University. He obtained his Ph.D. in Physics in 1993 from the Institute of Physics, N. Copernicus University, and his habilitation in physics in 2000. In 2001 he was appointed director of the Laboratory of Parallel and Distributed Processing at the Faculty of Mathematics, N. Copernicus University. His main research interest is the development and application of Quantum-Classical Molecular Dynamics and the Approximated Valence Bond method to the study of enzymatic reactions in biological systems. In the last few years, he has been involved in the development of parallel and grid tools for large-scale scientific applications.
5.
With advances in remote-sensing technology, large volumes of data can no longer be analyzed efficiently and rapidly, especially with the arrival of high-resolution images. The development of image-processing technology is an urgent and complex problem for computer and geo-science experts. It involves not only knowledge of remote sensing, but also of computing and networking. Remotely sensed images need to be processed rapidly and effectively in a distributed and parallel processing environment. Grid computing is a new form of distributed computing, providing an advanced computing and sharing model to solve large and computationally intensive problems. Following the basic principles of grid computing, we construct a distributed processing system for remotely sensed images. This paper focuses on the implementation of such a distributed computing and processing model based on the theory of grid computing. Firstly, problems in the field of remotely sensed image processing are analyzed. Then, the distributed (and parallel) computing model design, based on grid computing, is presented. Finally, implementation methods using middleware technology are discussed in detail. From a test analysis of our system, TARIES.NET, the whole image-processing system is evaluated, and the results show the feasibility of the model design and the efficiency of the remotely sensed image distributed and parallel processing system.
6.
Qunying Huang Chaowei Yang 《Computers & Geosciences》2011,37(2):165-176
Many geographic analyses are very time-consuming and do not scale well when large datasets are involved. For example, the interpolation of DEMs (digital elevation models) for large geographic areas can become a problem in practical applications, especially for web applications such as terrain visualization, where a fast response is required and computational demands exceed the capacity of a traditional single processing unit conducting serial processing. Therefore, high performance and parallel computing approaches, such as grid computing, were investigated to speed up geographic analysis algorithms such as DEM interpolation. The key for grid computing is to configure an optimized grid computing platform for the geospatial analysis and to optimally schedule the geospatial tasks within the grid platform; however, little research has focused on this. Using DEM interpolation as an example, we report our systematic research on configuring and scheduling a high performance grid computing platform to improve the performance of geographic analyses, through a systematic study of how the number of cores, processors, grid nodes, different network connections and concurrent requests impact the speedup of geospatial analyses. Condor, a grid middleware, is used to schedule the DEM interpolation tasks for different grid configurations. A Kansas raster-based DEM is used for a case study, and an inverse distance weighting (IDW) algorithm is used in the interpolation experiments.
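The IDW algorithm used in the interpolation experiments is, in serial form, a short computation; the sample points and power parameter below are illustrative:

```python
# Minimal serial IDW (inverse distance weighting) sketch -- the same
# per-point computation the paper distributes across Condor nodes.

def idw(x, y, samples, p=2.0):
    """Estimate elevation at (x, y) from (xi, yi, zi) samples."""
    num = den = 0.0
    for xi, yi, zi in samples:
        d2 = (x - xi) ** 2 + (y - yi) ** 2
        if d2 == 0.0:
            return zi  # exactly on a sample point
        w = 1.0 / d2 ** (p / 2.0)  # weight = 1 / distance**p
        num += w * zi
        den += w
    return num / den

samples = [(0, 0, 10.0), (1, 0, 20.0), (0, 1, 30.0)]
print(idw(0.5, 0.0, samples))  # ≈ 16.364
```

Parallelizing this is embarrassingly simple: each grid node interpolates a disjoint tile of output cells, which is what makes the scheduling (rather than the algorithm) the interesting problem.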
7.
Nazareno Andrade Francisco Brasileiro Walfredo Cirne Miranda Mowbray 《Journal of Parallel and Distributed Computing》2007
Currently, most computational grids (systems allowing transparent sharing of computing resources across organizational boundaries) are assembled using human negotiation. This procedure does not scale well, and is too inflexible to allow for large open grids. Peer-to-peer (P2P) grids present an alternative way to build grids with many sites. However, to actually assemble a large grid, peers must have an incentive to provide resources to the system. In this paper we present an incentive mechanism called the Network of Favors, which makes it in the interest of each participating peer to contribute its spare resources. We show through simulations with up to 10,000 peers and experiments with software implementing the mechanism in a deployed system that the Network of Favors promotes collaboration in a simple, robust and scalable fashion. We also discuss experiences of using OurGrid, a grid based on this mechanism.
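The core bookkeeping behind the Network of Favors can be sketched as follows; the method names and values are illustrative, not OurGrid's actual API:

```python
# Sketch of the Network of Favors idea: each peer locally tracks a favor
# balance per partner and grants idle resources to the partner with the
# highest balance; no global accounting or broker is needed.

class Peer:
    def __init__(self):
        self.balance = {}  # partner id -> favors owed to that partner

    def favor_received(self, partner, amount):
        self.balance[partner] = self.balance.get(partner, 0.0) + amount

    def favor_given(self, partner, amount):
        # Balances never go negative: freeloaders just sit at zero.
        self.balance[partner] = max(0.0, self.balance.get(partner, 0.0) - amount)

    def choose_requester(self, requesters):
        """Grant spare resources to the requester with the best record."""
        return max(requesters, key=lambda r: self.balance.get(r, 0.0))

p = Peer()
p.favor_received("peer-a", 5.0)
p.favor_received("peer-b", 2.0)
print(p.choose_requester(["peer-a", "peer-b", "freeloader"]))  # peer-a
```

Because each peer decides purely from its own history, the mechanism stays robust as the system grows, which matches the scalability result reported above.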
8.
Wibke Sudholt Kim K. Baldridge David Abramson Colin Enticott Slavisa Garic 《New Generation Computing》2004,22(2):137-146
Computational modeling in the health sciences is still very challenging, and much of the success to date has come despite the difficulties involved in integrating all of the technologies, software, and other tools necessary to answer complex questions. Very large-scale problems are open to questions of spatio-temporal scale, and of whether physico-chemical complexity is matched by biological complexity. For example, and for many reasons, many large-scale biomedical computations today still use rather simplified physics/chemistry compared with the state of knowledge of the actual biology/biochemistry. Modern grid technologies offer the ability to create new paradigms for computing, enabling access to resources that facilitate spanning the biological scale.
Wibke Sudholt: She is a postdoc with J. A. McCammon and K. Baldridge at the University of California, San Diego, and a fellow of the German Academic Exchange Service (DAAD). She received her diploma (Dipl. Chem.) from the University of Dortmund, Germany in 1996, and her doctoral degree (Dr. rer. nat.) in 2001 from Heinrich-Heine-University Duesseldorf, Germany, under Wolfgang Domcke, for theoretical studies of a charge-transfer process. Her current research interests include the combination of quantum chemistry, molecular mechanics and continuum electrostatics to describe chemical reactions in complex molecular systems.
Kim K. Baldridge: She is a theoretical and computational chemist with expertise in the design, development, and application of computational
quantum chemical methodology for understanding chemical and biochemical reaction processes of broad interest. Efforts include
development of computational tools and associated grid technologies for the broader scientific community. She is a Fellow
of the APS and AAAS, and was the 2000 Agnes Fay Morgan Awardee for Research Achievement in Chemistry. She is the Program Director
for Integrative Computational Sciences at SDSC, where she has worked since 1989, and additionally holds an adjunct professorship
at UCSD.
David Abramson: He is currently a professor of Computer Science in the School of Computer Science and Software Engineering (CSSE) at Monash
University, Australia. He is a project leader in the Co-operative Research Centre for Distributed Systems Nimrod Project and
Chief Investigator on two ARC funded research projects. He is a co-founder of Active Tools P/L with Dr. Rok Sosic, established
to commercialize the Nimrod project, and Guardsoft, focused on commercializing the Guard project. Abramson’s current interests
are in high performance computer systems design and software engineering tools for programming parallel, distributed supercomputers.
Colin Enticott: He completed a BComp (Hons) degree in mid-2002 at Monash University, Australia. His project, completed under the supervision of Professor David Abramson, “The Multi Site EnFuzion Client”, dealt with cluster-of-clusters computing, which has led him into Grid computing. He is currently employed by the DSTC (Distributed Systems Technology Centre, Melbourne, Australia), working on the user front-end of Nimrod (the Nimrod Portal) and cluster implementations.
Slavisa Garic: He completed a Bachelor of Computer Science (Hons) degree at Monash University, Australia in November 2001. His project, “Suburban Area Networks: Security”, involved working on security aspects of wireless community and suburban networks. At the beginning of 2002, he joined the Distributed Systems Technology Centre, Melbourne, Australia, where he currently works as a core Nimrod/G developer.
9.
Distributed computing (DC) projects tackle large computational problems by exploiting the donated processing power of thousands of volunteered computers, connected through the Internet. To efficiently employ the computational resources of one of the world's largest DC efforts, GPUGRID, the project scientists require tools that handle hundreds of thousands of tasks which run asynchronously and generate gigabytes of data every day. We describe RBoinc, an interface that allows computational scientists to embed the DC methodology into the daily work-flow of high-throughput experiments. By extending the Berkeley Open Infrastructure for Network Computing (BOINC), the leading open-source middleware for current DC projects, with mechanisms to submit and manage large-scale distributed computations from individual workstations, RBoinc turns distributed grids into cost-effective virtual resources that can be employed by researchers in work-flows similar to conventional supercomputers. The GPUGRID project is currently using RBoinc for all of its in silico experiments based on molecular dynamics methods, including the determination of binding free energies and free energy profiles in all-atom models of biomolecules.
10.
GridMD is a C++ class library intended for constructing simulation applications and running them in distributed environments. The library abstracts away the details of distributed environments, so that almost no knowledge of distributed computing is required from a physicist working with the library. She or he just uses GridMD function calls inside the application C++ code to perform parameter sweeps or other tasks that can be distributed at run-time. In this paper we briefly review the GridMD architecture. We also describe the job manager component, which submits jobs to a remote system. The C++ source code of our PBS job manager may be used as a standalone tool and is freely available, as is the full library source code. As illustrative examples we use simple expression evaluation codes and a real application: Coulomb cluster explosion simulation by Molecular Dynamics.
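GridMD itself is a C++ library; the sketch below only illustrates, in Python, the parameter-sweep pattern such libraries automate: each parameter value becomes an independent job, and the run mechanism (serial here) can be swapped for a distributed one. The function and parameter names are invented:

```python
# Parameter-sweep pattern: the sweep driver is indifferent to whether jobs
# run locally (the default serial map) or are farmed out to a grid.

def sweep(func, values, run_job=map):
    """Run func once per parameter value; run_job may submit jobs to a
    cluster instead of the local (serial) map used here."""
    return list(run_job(func, values))

def simulate(temperature):          # stand-in for a real simulation kernel
    return {"T": temperature, "energy": -1.5 * temperature}

results = sweep(simulate, [100, 200, 300])
print([r["energy"] for r in results])  # [-150.0, -300.0, -450.0]
```

Replacing `run_job` with, say, a pool's parallel map is the whole point of the abstraction: the physicist's code does not change.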
11.
Grid: Resource Sharing Technology for Virtual Organizations
1. Introduction. For the past several years, the term "Grid" was used mainly within academia. Today it has moved from behind the scenes to center stage and attracts widespread attention in the IT industry. The term originated in academic circles as a description of sharing all networked resources (from PCs to supercomputers) to solve supercomputing tasks jointly. It is generally held that substantive research on grids began in 1995, when Dr. Ian Foster of Argonne National Laboratory and Dr. Carl Kesselman of the Information Sciences Institute at the University of Southern California jointly led Globus, a high-performance distributed computing project supported by the U.S. government (Department of Energy, NASA, and others). Within this project, in order to…
12.
Over the last few years, the ability to adapt has become an essential characteristic of grid applications, because it allows applications to face the dynamic and changing nature of grid systems. This adaptive capability is applied within different grid processes such as resource monitoring, resource discovery, or resource selection. In this regard, the present approach provides a self-adaptive ability to grid applications, focusing on enhancing the resource selection process. This contribution proposes an Efficient Resources Selection model to determine the resources that best fit the application requirements. Hence, the model guides applications during their execution without modifying or controlling grid resources. In the evaluation phase, the experiments were carried out on a real European grid infrastructure. The results show that the model not only provides a self-adaptive ability but also achieves a reduction in the applications' execution time and an improvement in the rate of successfully completed tasks.
13.
Hyoung-Gon Lee Han-Il Jeong 《Journal of Systems and Software》2009,82(7):1087-1097
The material requirements planning (MRP) process is crucial when software packages, like enterprise resource planning (ERP) software, are used in production planning for manufacturing enterprises, to ensure that appropriate quantities of raw materials and subassemblies are provided at the right time. Whereas little attention has been paid to the architectural aspects of the MRP process in academic studies, in practice its time-consuming character is often reported, owing to intensive interactions with databases and the difficulty of real-time processing. This paper proposes a grid-enabled MRP process in a distributed database environment and demonstrates the performance improvement of the proposed process by a simulation study.
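The per-item computation that a grid-enabled MRP run distributes is essentially a bill-of-materials explosion with netting against inventory; the data below is invented, and shared components are ignored for brevity:

```python
# Gross-to-net MRP explosion: walk the bill of materials top-down,
# netting each item's gross requirement against on-hand inventory.

bom = {"bike": {"wheel": 2, "frame": 1}, "wheel": {"spoke": 32}}
on_hand = {"wheel": 1, "spoke": 40}

def mrp(item, qty, orders):
    """Accumulate net planned orders for item and all its components."""
    net = max(0, qty - on_hand.get(item, 0))
    orders[item] = orders.get(item, 0) + net
    for child, per_unit in bom.get(item, {}).items():
        mrp(child, net * per_unit, orders)
    return orders

print(mrp("bike", 10, {}))  # {'bike': 10, 'wheel': 19, 'spoke': 568, 'frame': 10}
```

Each subtree of the BOM is independent once its parent's net requirement is known, which is what makes the process amenable to distribution over a grid.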
14.
In this paper, we consider grid resource scheduling based on multiple QoS dimensions. Each of a grid task agent's diverse requirements is modeled as a quality of service (QoS) dimension; associated with each QoS dimension is a utility function that defines the benefit perceived by a user with respect to the QoS choices in that dimension. The objective of multiple-QoS grid resource scheduling is to maximize the global utility of the scheduling system.
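The utility-maximizing selection described above can be sketched as follows; the utility shapes and resource offers are illustrative assumptions, not the paper's:

```python
# Multi-QoS scheduling sketch: each QoS dimension has a utility function,
# and a task is assigned to the resource whose offer maximizes total utility.

utilities = {
    "deadline": lambda seconds: 1.0 / (1.0 + seconds),   # sooner is better
    "cost":     lambda dollars: 1.0 / (1.0 + dollars),   # cheaper is better
    "reliability": lambda prob: prob,                    # higher is better
}

def total_utility(offer):
    """Sum the per-dimension utilities of one resource's QoS offer."""
    return sum(u(offer[dim]) for dim, u in utilities.items())

offers = {
    "cluster-a": {"deadline": 10, "cost": 5.0, "reliability": 0.99},
    "cluster-b": {"deadline": 60, "cost": 0.5, "reliability": 0.90},
}
best = max(offers, key=lambda name: total_utility(offers[name]))
print(best)  # cluster-b
```

A weighted sum is the simplest aggregation; the global objective over all tasks is then the sum of each task's achieved utility.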
15.
Rémi Bertin Sascha Hunold Arnaud Legrand Corinne Touati 《Journal of Parallel and Distributed Computing》2014
Large scale distributed systems typically comprise hundreds to millions of entities (applications, users, companies, universities) that have only a partial view of resources (computers, communication links). How to fairly and efficiently share such resources between entities in a distributed way has thus become a critical question.
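One standard formalization of "fair sharing" in such systems is max-min fairness, sketched below in centralized form; the paper's concern is achieving fairness in a distributed way, so this is only a definition of the target, not the paper's algorithm:

```python
# Max-min fair allocation: repeatedly satisfy the smallest demand and
# split the leftover capacity among the remaining entities.

def max_min_fair(capacity, demands):
    """Return per-entity allocations for the given demands and capacity."""
    alloc = {}
    remaining = dict(demands)
    while remaining:
        share = capacity / len(remaining)
        satisfied = {e: d for e, d in remaining.items() if d <= share}
        if not satisfied:
            # Nobody's demand fits an equal share: split what is left evenly.
            for e in remaining:
                alloc[e] = share
            break
        for e, d in satisfied.items():
            alloc[e] = d
            capacity -= d
            del remaining[e]
    return alloc

print(max_min_fair(10.0, {"a": 2.0, "b": 4.0, "c": 8.0}))  # {'a': 2.0, 'b': 4.0, 'c': 4.0}
```

Small demands are met in full; the bottlenecked entities split the rest equally, which is exactly the "no one can gain without someone poorer losing" property.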
16.
Using the Internet, “public” computing grids can be assembled using “volunteered” PCs. To achieve this, volunteers download and install a software application capable of sensing periods of low local processor activity. During such times, this program on the local PC downloads and processes a subset of the project's data. At the completion of processing, the results are uploaded to the project and the cycle repeats.
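The download-process-upload cycle described above can be sketched as a single loop iteration; the idle threshold and callbacks are invented for illustration:

```python
# One iteration of the public-computing loop: sense idleness, fetch a
# work unit, compute, upload, repeat.

IDLE_THRESHOLD = 0.2  # assumed: below 20% CPU load counts as idle

def volunteer_cycle(cpu_load, fetch, process, upload):
    """Run one iteration; returns True if a work unit was processed."""
    if cpu_load >= IDLE_THRESHOLD:
        return False   # owner is using the machine; stay out of the way
    unit = fetch()     # download a subset of the project's data
    upload(process(unit))
    return True

done = []
volunteer_cycle(0.05, fetch=lambda: list(range(5)),
                process=lambda xs: sum(xs), upload=done.append)
print(done)  # [10]
```

A real client wraps this loop with scheduling, checkpointing, and result validation, but the core cycle is no more than this.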
17.
M. Arroqui J. Rodriguez Alvarez H. Vazquez C. Machado C. Mateos A. Zunino 《Concurrency and Computation》2015,27(17):4716-4740
The Grid Computing paradigm aims to create a ‘virtual’ and powerful single computer with many distributed resources to solve resource-intensive problems. The term ‘gridification’ refers to the process of transforming a conventional application to run in a Grid environment. In that sense, the more automatic this process is, the easier it is for developers with little expertise in parallel and distributed computing to take advantage of these resources. To date, many semiautomatic gridifiers have been built to support different gridification approaches and application code structures or anatomies. Furthermore, agricultural simulation applications have a particular common anatomy based on biophysical entities, such as animals, crops, and pastures, which are updated by actions, such as growing animals, growing crops, and growing pastures, along simulation execution. However, this anatomy is not fully supported by any of the existing gridifiers. Thus, this paper presents the Agricultural Simulation Applications Gridifier (ASAG), a method for easy gridification of agricultural simulation applications, and its Java implementation, named Java ASAG (JASAG). The main design drivers of JASAG are middleware independence, separation of business logic and Grid behavior, and performance increase. An experimental evaluation showing the feasibility of the gridification method and its implementation is also reported, which resulted in speedups of up to 25 using a real agricultural simulation application. Copyright © 2014 John Wiley & Sons, Ltd.
18.
19.
Grid computing connects heterogeneous resources to achieve the illusion of being a single available entity. Charging for these
resources based on demand is often referred to as utility computing, where resource providers lease computing power with varying costs based on processing speed. Consumers using this resource
have time and cost constraints associated with each job they submit. Determining the optimal way to divide the job among the
available resources with regard to the time and cost constraints is tasked to the Grid Resource Broker (GRB). The GRB must use an optimization algorithm that returns an accurate result in a timely manner. The genetic algorithm and the simulated annealing algorithm can both be used to achieve this goal, although simulated annealing outperforms the genetic algorithm for use by
the GRB. Determining optimal values for the variables used in each algorithm is often achieved through trial and error, and
success depends upon the solution domain of the problem.
Sanjay P. Ahuja (corresponding author)
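A toy version of the broker's optimization problem, solved with simulated annealing as the abstract suggests, might look like this; provider speeds, prices, the penalty, and the cooling schedule are all invented, not the paper's settings:

```python
# Split UNITS work units across two providers to minimize cost while
# meeting a deadline; simulated annealing searches the split.

import math
import random

providers = [(10.0, 1.0), (40.0, 3.0)]  # (units/hour, $/unit) -- invented
UNITS, DEADLINE = 100, 4.0              # job size and deadline (hours)

def objective(split):
    """Total cost, heavily penalized when any provider misses the deadline."""
    times = [n / speed for n, (speed, _) in zip(split, providers)]
    cost = sum(n * price for n, (_, price) in zip(split, providers))
    return cost + (1000.0 if max(times) > DEADLINE else 0.0)

def anneal(steps=5000, temp=50.0, cooling=0.999):
    random.seed(0)                      # deterministic for the example
    n0 = best = UNITS                   # start with everything on provider 0
    for _ in range(steps):
        cand = min(UNITS, max(0, n0 + random.choice((-1, 1))))
        delta = objective((cand, UNITS - cand)) - objective((n0, UNITS - n0))
        if delta < 0 or random.random() < math.exp(-delta / temp):
            n0 = cand                   # accept downhill moves, sometimes uphill
        if objective((n0, UNITS - n0)) < objective((best, UNITS - best)):
            best = n0
        temp *= cooling
    return best, UNITS - best

split = anneal()
print(split, objective(split))
```

The uphill-acceptance probability `exp(-delta/temp)` is what lets the search cross the deadline-penalty ridge that a greedy method would get stuck behind, which is consistent with the abstract's observation that annealing suits the GRB.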
20.
Eugen Feller John Mehnert-Spahn Michael Schoettner Christine Morin 《Future Generation Computer Systems》2012,28(1):163-170
The EU-funded XtreemOS project implements an open-source grid operating system based on Linux. In order to provide fault tolerance and migration for grid applications, it integrates a distributed grid-checkpointing service called XtreemGCP. This service is designed to support various checkpointing protocols and different checkpointer packages (e.g. BLCR, LinuxSSI, OpenVZ, etc.) in a transparent manner through a uniform checkpointer interface. In this paper, we present the integration of a backward error recovery protocol based on independent checkpointing into the XtreemGCP service. The solution we propose is not bound to any particular checkpointer and thus can be used transparently on top of any checkpointer package. To evaluate the prototype we run it within a heterogeneous environment composed of single-PC nodes and a Single System Image (SSI) cluster. The experimental results demonstrate the capability of the XtreemGCP service to integrate different checkpointing protocols and to independently checkpoint a distributed application within a heterogeneous grid environment. Moreover, the performance evaluation shows that our solution outperforms the existing coordinated checkpointing protocol in terms of scalability.
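A uniform checkpointer interface of the kind XtreemGCP describes can be sketched as an abstract base class with pluggable backends; the class and method names here are invented, and the toy backend merely snapshots dictionaries rather than real processes:

```python
# Uniform checkpointer interface sketch: protocol code talks only to the
# abstract interface, so any backend (a real service would wrap BLCR,
# LinuxSSI, OpenVZ, ...) can plug in without protocol changes.

from abc import ABC, abstractmethod

class Checkpointer(ABC):
    @abstractmethod
    def checkpoint(self, process):
        """Save process state; return an opaque checkpoint handle."""

    @abstractmethod
    def restart(self, handle):
        """Recreate the process state from a saved handle."""

class InMemoryCheckpointer(Checkpointer):
    """Toy backend: 'state' is just a dict snapshot kept in memory."""
    def __init__(self):
        self.store = {}

    def checkpoint(self, process):
        handle = len(self.store)
        self.store[handle] = dict(process)  # snapshot current state
        return handle

    def restart(self, handle):
        return dict(self.store[handle])

def independent_checkpoint(checkpointer, processes):
    """Independent (uncoordinated) checkpointing: each process is saved
    on its own, with no synchronization between the saves."""
    return [checkpointer.checkpoint(p) for p in processes]

cp = InMemoryCheckpointer()
handles = independent_checkpoint(cp, [{"pc": 1}, {"pc": 7}])
print([cp.restart(h) for h in handles])  # [{'pc': 1}, {'pc': 7}]
```

The absence of any synchronization in `independent_checkpoint` is the point of the independent protocol; the price, handled elsewhere in such services, is recovering a consistent global state at restart.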