Similar Documents
20 similar documents found (search time: 31 ms)
1.
Multiple query optimization (MQO) in the cloud has become a promising research direction due to the popularity of cloud computing, which routinely runs massive data analysis queries (jobs). These CPU/IO-intensive analysis queries are complex and time-consuming but share common components. It is challenging to detect, share and reuse the common components among thousands of SQL-like queries. Previous heuristic- or genetic-based solutions to MQO are not well suited to large, growing query sets. In this paper, we develop a sharing system called LSShare based on our proposed Lineage-Signature approach. LSShare efficiently solves the MQO problem for recurring query sets in the cloud. Our system has been prototyped in a distributed system built for massive data analysis on Alibaba's cloud computing platform (http://www.alibaba.com/). Experimental results on real data sets demonstrate the efficiency and effectiveness of the proposed approach.
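
The abstract does not reproduce the Lineage-Signature mechanism itself; as a rough, hypothetical illustration of the general idea of signature-based sharing, the Python sketch below hashes normalized operator subtrees so that recurring queries can look up previously materialized results (the plan encoding and cache policy are assumptions, not the paper's design):

```python
import hashlib

def signature(plan):
    """Recursively hash a normalized operator tree (hypothetical plan format:
    nested tuples such as ('join', ('scan', 'logs'), ('scan', 'users')))."""
    if isinstance(plan, tuple):
        payload = "(" + ",".join(signature(p) for p in plan) + ")"
    else:
        payload = str(plan)
    return hashlib.sha1(payload.encode()).hexdigest()

class SharedResultCache:
    """Maps subtree signatures to materialized results so that later
    recurring queries reuse common components instead of recomputing them."""
    def __init__(self):
        self.results = {}

    def execute(self, plan, evaluate):
        sig = signature(plan)
        if sig not in self.results:      # first occurrence pays the cost...
            self.results[sig] = evaluate(plan)
        return self.results[sig]         # ...recurrences reuse the result
```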

2.
This paper shows two examples of how the analysis of option pricing problems can lead to computational methods efficiently implemented in parallel. These computational methods outperform "general purpose" methods (for example, Monte Carlo or finite differences methods). The GPU implementation of two numerical algorithms to price two specific derivatives (continuous barrier options and realized variance options) is presented. These algorithms are implemented in CUDA subroutines ready to run on Graphics Processing Units (GPUs), and their performance is studied. The realization of these subroutines is motivated by the extensive use of the derivatives considered in the financial markets to hedge or to take risk, and by the interest of financial institutions in using state-of-the-art hardware and software to speed up the decision process. The performance of these algorithms is measured using the (CPU/GPU) speed-up factor, that is, the ratio between the (wall clock) times required to execute the code on a CPU and on a GPU. The choice of the reference CPU and GPU used to evaluate the speed-up factors is stated. The outstanding performance of the algorithms developed is due to the mathematical properties of the pricing formulae used and to the ad hoc software implementation. In the case of realized variance options, when the computation is done in single precision, the comparison between CPU and GPU execution times gives speed-up factors of the order of a few hundred. For barrier options, the corresponding speed-up factors are about fifteen to twenty. The CUDA subroutines to price barrier options and realized variance options can be downloaded from the website http://www.econ.univpm.it/recchioni/finance/w13. A more general reference to the work in mathematical finance of some of the authors and their coauthors is the website http://www.econ.univpm.it/recchioni/finance/.
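
The paper's own algorithms exploit closed-form pricing properties and are not reproduced here; the sketch below is only a generic Monte Carlo down-and-out barrier call, the kind of "general purpose" CPU baseline such specialized GPU codes are compared against, with illustrative (assumed) parameters:

```python
import numpy as np

def barrier_down_out_call_mc(s0, k, barrier, r, sigma, t,
                             n_paths=100_000, n_steps=252, seed=0):
    """Plain Monte Carlo pricer for a continuously monitored (here: daily)
    down-and-out call under geometric Brownian motion (illustrative baseline)."""
    rng = np.random.default_rng(seed)
    dt = t / n_steps
    log_s = np.full(n_paths, np.log(s0))
    alive = np.ones(n_paths, dtype=bool)
    drift = (r - 0.5 * sigma**2) * dt
    vol = sigma * np.sqrt(dt)
    for _ in range(n_steps):
        log_s += drift + vol * rng.standard_normal(n_paths)
        alive &= log_s > np.log(barrier)   # knocked out once the barrier is hit
    payoff = np.where(alive, np.maximum(np.exp(log_s) - k, 0.0), 0.0)
    return np.exp(-r * t) * payoff.mean()

# Speed-up factor as defined in the abstract:
#   speed_up = t_cpu_wall_clock / t_gpu_wall_clock
price = barrier_down_out_call_mc(s0=100, k=100, barrier=90, r=0.03, sigma=0.2, t=1.0)
```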

3.
4.
ZENTURIO [R. Prodan and T. Fahringer, ZENTURIO: A Grid Middleware-based Tool for Experiment Management of Parallel and Distributed Applications, Journal of Parallel and Distributed Computing, 2003. http://authors.elsevier.com/sd/article/S0743731503001977 (to appear)] is a semi-automatic experiment management tool for performance and parameter studies of parallel and distributed applications on cluster and Grid architectures. ZENTURIO has been designed as an Open Grid Services Architecture (OGSA)-compliant Grid application built on top of standard Web and Grid services technologies. In this paper we first present various issues from our transition to an Open Grid Services Infrastructure (OGSI)-compliant prototype. We then introduce a generic framework for solving NP-complete optimisation problems for parallel and Grid applications, and present a case study on high-throughput scheduling of large sets of computational tasks on the Grid using genetic algorithms. Our algorithm delivers a fivefold improvement in solution quality over 500 generations in a Grid with uniformly distributed computational resources. This research is supported by the Austrian Science Fund as part of the Aurora project under contract SFBF1104.
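
ZENTURIO's encoding and operators are not given in the abstract; purely as a sketch of the genetic-algorithm scheduling idea, the code below evolves task-to-machine assignments to minimize makespan (all names, fitness, and operators are assumptions, not the paper's algorithm):

```python
import random

def makespan(assign, task_cost, machine_speed):
    """Completion time of the most loaded machine for a given assignment."""
    load = [0.0] * len(machine_speed)
    for task, m in enumerate(assign):
        load[m] += task_cost[task] / machine_speed[m]
    return max(load)

def ga_schedule(task_cost, machine_speed, pop=50, generations=500, mut=0.05):
    n, m = len(task_cost), len(machine_speed)
    popl = [[random.randrange(m) for _ in range(n)] for _ in range(pop)]
    for _ in range(generations):
        popl.sort(key=lambda a: makespan(a, task_cost, machine_speed))
        survivors = popl[: pop // 2]          # elitist selection
        children = []
        while len(survivors) + len(children) < pop:
            p1, p2 = random.sample(survivors, 2)
            cut = random.randrange(1, n)
            child = p1[:cut] + p2[cut:]       # one-point crossover
            child = [random.randrange(m) if random.random() < mut else g
                     for g in child]          # per-gene mutation
            children.append(child)
        popl = survivors + children
    return min(popl, key=lambda a: makespan(a, task_cost, machine_speed))

best = ga_schedule([5, 3, 8, 2, 6], [1.0, 2.0])   # toy instance
```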

5.
Structural bioinformatics applies computational methods to analyze and model three-dimensional molecular structures. A huge number of applications are available for working with structural data on a large scale. Using these tools on distributed computing infrastructures (DCIs), however, is often complicated by a lack of suitable interfaces. The MoSGrid (Molecular Simulation Grid) science gateway provides an intuitive user interface to several widely used applications for structural bioinformatics, molecular modeling, and quantum chemistry. It ensures the confidentiality, integrity, and availability of data via a granular security concept, which covers all layers of the infrastructure. The security concept applies SAML (Security Assertion Markup Language) and allows trust delegation from the user interface layer across the high-level middleware layer and the Grid middleware layer down to the HPC facilities. SAML assertions had to be integrated into the MoSGrid infrastructure in several places: the workflow-enabled Grid portal WS-PGRADE (Web Services Parallel Grid Runtime and Developer Environment), the gUSE (Grid User Support Environment) DCI services, and the cloud file system XtreemFS. The presented security infrastructure allows single sign-on across all involved DCI components and therefore lowers the hurdle for users to utilize large HPC infrastructures for structural bioinformatics.

6.
The WeNMR (http://www.wenmr.eu) project is a European Union funded international effort to streamline and automate analysis of Nuclear Magnetic Resonance (NMR) and Small Angle X-ray Scattering (SAXS) imaging data for atomic and near-atomic resolution molecular structures. Conventional calculation of structure requires the use of various software packages, considerable user expertise, and ample computational resources. To facilitate the use of NMR spectroscopy and SAXS in the life sciences, the WeNMR consortium has established standard computational workflows and services through easy-to-use web interfaces, while still retaining sufficient flexibility to handle more specific requests. Thus far, a number of programs often used in structural biology have been made available through application portals. The implementation of these services, in particular the distribution of calculations to a Grid computing infrastructure, involves a novel mechanism for submission and handling of jobs that is independent of the type of job being run. With over 450 registered users (September 2012), WeNMR is currently the largest Virtual Organization (VO) in the life sciences. With its large and worldwide user community, WeNMR has become the first Virtual Research Community officially recognized by the European Grid Infrastructure (EGI).

7.
8.
While new infrastructures for large computational challenges are becoming widely accessible to researchers, computational codes need to be re-designed to exploit the new facilities. The Grid and cloud computing concepts are changing how computational resources are distributed and made available, and much effort is being made to develop new codes that better exploit these resources. This paper presents an example of the use of Grid resources, based on gLite middleware, to run cosmological simulations that until now have normally been executed on supercomputers. We have also used the Grid to explore and visualize the resulting dataset. We discuss in particular the performance of FLY, a parallel code implementing the octal-tree algorithm introduced by J. Barnes and P. Hut to compute the gravitational field efficiently. It simulates the evolution of the collisionless component of the material content of our Universe. FLY was originally developed to run on mainframe systems using the one-sided communication paradigm; here we present a modified version of the computational algorithm that exploits the Grid environment. We also integrated the data exploration and visualization process on the Grid, obtaining preliminary results using the distributed facilities.
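
FLY's octal-tree implementation is not shown in the abstract; the following minimal 2D sketch only illustrates the Barnes-Hut opening criterion it builds on: a distant cell is treated as a single pseudo-particle when size/distance falls below an accuracy parameter theta (data layout and softening are assumptions for the sketch):

```python
import math
from dataclasses import dataclass, field

@dataclass
class Node:
    """Tree cell: total mass, center of mass, side length, child cells (2D sketch)."""
    mass: float
    com: tuple                      # center of mass (x, y)
    size: float                     # cell side length
    children: list = field(default_factory=list)

def accel(node, pos, theta=0.5, g=1.0, eps=1e-3):
    """Barnes-Hut force walk: if size/distance < theta, approximate the whole
    cell by its center of mass; otherwise open the cell and recurse."""
    dx, dy = node.com[0] - pos[0], node.com[1] - pos[1]
    d = math.hypot(dx, dy) + eps    # softening avoids self-interaction blow-up
    if not node.children or node.size / d < theta:
        f = g * node.mass / d**3
        return (f * dx, f * dy)
    ax = ay = 0.0
    for child in node.children:
        cx, cy = accel(child, pos, theta, g, eps)
        ax += cx
        ay += cy
    return (ax, ay)
```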

9.
As the number of software vulnerabilities increases, research on software vulnerabilities has become a focal point in information security. A vulnerability can be exploited to attack an information asset through its associated weakness. Multiple attacks may target one software product at the same time, however, and it is necessary to rank and prioritize those attacks in order to establish a better defense. This paper proposes a similarity measurement to compare and categorize vulnerabilities, and a set of security metrics to rank attacks based on vulnerability analysis. The vulnerability information is retrieved from a vulnerability management ontology integrating commonly used standards such as CVE (http://www.cve.mitre.org/), CWE (http://www.cwe.mitre.org/), CVSS (http://www.first.org/cvss/), and CAPEC (http://www.capec.mitre.org/). This approach can be used in many areas of vulnerability management to secure information systems and e-business, such as vulnerability classification, mitigation and patching, threat detection, and attack prevention.
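
The paper's actual similarity measure and metrics are not given in the abstract; one plausible, simplified reading is sketched below: compare vulnerabilities by the overlap of their CWE weakness tags and rank them by CVSS base score (the records and the Jaccard/score choices are assumptions for illustration):

```python
def jaccard(a, b):
    """Tag-set similarity (e.g., shared CWE weakness IDs) between two vulnerabilities."""
    a, b = set(a), set(b)
    return len(a & b) / len(a | b) if a | b else 0.0

# Hypothetical records combining CWE tags with CVSS base scores.
vulns = {
    "CVE-2021-0001": {"cwe": {"CWE-79"}, "cvss": 6.1},
    "CVE-2021-0002": {"cwe": {"CWE-79", "CWE-89"}, "cvss": 9.8},
}

# Categorize: pair vulnerabilities whose weakness profiles overlap strongly.
sim = jaccard(vulns["CVE-2021-0001"]["cwe"], vulns["CVE-2021-0002"]["cwe"])  # 0.5

# Rank: prioritize attacks targeting the highest-scoring vulnerabilities first.
ranked = sorted(vulns, key=lambda v: vulns[v]["cvss"], reverse=True)
```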

10.
11.
12.
A computational Grid (also called a metacomputing system) aggregates geographically dispersed resources for large-scale distributed high-performance computing. PVM and MPI are widely used parallel programming environments, and they need to be integrated into metacomputing systems as basic building blocks for parallel computation. Addressing the dynamic, distributed, performance-variable, and heterogeneous nature of metacomputing resources, this paper implements an adaptive, unified multi-programming environment. The paper describes the architecture of this environment and uses agent technology to implement remote compilation, resource discovery, heterogeneity masking, and optimized scheduling.

13.
Many attempts [1, 7, 8, 35] have been made to overcome the limit imposed by the Turing Machine [34] to realise general mathematical functions and models of (physical) phenomena.

They center around the notion of computability.

In this paper we propose a new definition of computability which lays the foundations for a theory of cybernetic and intelligent machines in which the classical limits imposed by discrete algorithmic procedures are offset by the use of continuous operators on unlimited data. This data is supplied to the machine in a totally parallel mode, as a field or wave.

This theory of machines draws its concepts from category theory, Lie algebras, and general systems theory. It permits the incorporation of intelligent control into the design of the machine as a virtual element. The incorporated control can be realized in many (machine) configurations of which we give three:

a) a quantum mechanical realization appropriate to a possible understanding of the quantum computer and other models of the physical microworld,

b) a stochastic realization based on Kolmogorov-Gabor theory leading to a possible understanding of generalised models of the physical or thermodynamic macroworld, and lastly

c) a classical mechanical realization appropriate to the study of a new class of robots.

Particular applications at a fundamental level are cited in geometry, mathematics, biology, acoustics, aeronautics, quantum mechanics, general relativity, and Markov chains. The proposed theory therefore opens a new way towards understanding the processes that underlie intelligence.


14.
This article describes the Open Science Grid, a large distributed computational infrastructure in the United States which supports many different high-throughput scientific applications, and partners (federates) with other infrastructures nationally and internationally to form multi-domain integrated distributed systems for science. The Open Science Grid consortium not only provides services and software to an increasingly diverse set of scientific communities, but also fosters a collaborative team of practitioners and researchers who use, support and advance the state of the art in large-scale distributed computing. The scale of the infrastructure can be expressed by the daily throughput of around seven hundred thousand jobs, just under a million hours of computing, a million file transfers, and half a petabyte of data movement. In this paper we introduce and reflect on some of the OSG capabilities, usage and activities.

15.
This paper presents a flexible framework for parallel and easy-to-implement topology optimization using the Portable and Extendable Toolkit for Scientific Computing (PETSc). The presented framework is based on a standardized and freely available library, and in the published form it solves the minimum compliance problem on structured grids using standard FEM and filtering techniques. For completeness, a parallel implementation of the Method of Moving Asymptotes is included as well. The capabilities are exemplified by minimum compliance and homogenization problems. In both cases the unprecedented fine discretization reveals new design features, providing novel insight. The code can be downloaded from www.topopt.dtu.dk/PETSc.
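
The PETSc framework itself is available at the URL above; as a small serial sketch of the filtering step it mentions, the code below applies the standard cone-shaped density filter on a structured 2D grid (periodic boundary handling via np.roll is a simplification for brevity, not the published implementation):

```python
import numpy as np

def density_filter(x, rmin):
    """Cone-weighted density filter: each filtered density is a distance-weighted
    average of its neighbours within radius rmin, regularizing the minimum
    compliance problem (sketch; boundaries treated as periodic for brevity)."""
    r = int(np.ceil(rmin))
    xf = np.zeros_like(x)
    wsum = np.zeros_like(x)
    for di in range(-r, r + 1):
        for dj in range(-r, r + 1):
            w = rmin - np.hypot(di, dj)    # cone weight, zero beyond rmin
            if w <= 0:
                continue
            xf += w * np.roll(np.roll(x, di, axis=0), dj, axis=1)
            wsum += w
    return xf / wsum

densities = np.random.rand(40, 80)          # toy structured-grid design field
filtered = density_filter(densities, rmin=2.4)
```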

16.
Many current international scientific projects are based on large-scale applications that are both computationally complex and require the management of large amounts of distributed data. Grid computing is fast emerging as the solution to the problems posed by these applications. To evaluate the impact of resource optimisation algorithms, simulation of the Grid environment can be used to achieve important performance results before any algorithms are deployed on the Grid. In this paper, we study the effects of various job scheduling and data replication strategies and compare them in a variety of Grid scenarios using several performance metrics. We use a Grid simulator and base our simulations on a world-wide Grid testbed for data-intensive high energy physics experiments. Our results show that scheduling algorithms which take into account both the file access cost of jobs and the workload of computing resources are the most effective at optimising computing and storage resources as well as improving job throughput. The results also show that, in most cases, the economy-based replication strategies which we have developed improve Grid performance under changing network loads.
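
The exact cost model is not given in the abstract; a generic form of its "file access cost plus workload" scheduling idea might look like the sketch below, where a job goes to the site minimizing a weighted sum of estimated data-transfer cost and queue length (site records, weights, and field names are assumptions):

```python
def site_cost(job, site, w_access=1.0, w_load=1.0):
    """Combined cost for running a job at a site: estimated cost of accessing the
    job's input files from that site plus the site's current queue load."""
    access = sum(site["transfer_cost"].get(f, 0.0) for f in job["files"])
    return w_access * access + w_load * site["queued_jobs"]

def schedule(job, sites):
    """Pick the site with the lowest combined data-access and workload cost."""
    return min(sites, key=lambda s: site_cost(job, s))

job = {"files": ["f1", "f2"]}
sites = [
    {"name": "site-A", "transfer_cost": {"f1": 0.0, "f2": 5.0}, "queued_jobs": 12},
    {"name": "site-B", "transfer_cost": {"f1": 2.0, "f2": 0.0}, "queued_jobs": 3},
]
print(schedule(job, sites)["name"])   # site-B: cheaper data access, shorter queue
```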

17.
Scientific applications are getting increasingly complex, e.g., to improve their accuracy by taking into account more phenomena. Meanwhile, computing infrastructures are continuing their fast evolution. Thus, software engineering is becoming a major issue to offer ease of development, portability and maintainability while achieving high performance. Component based software engineering offers a promising approach that enables the manipulation of the software architecture of applications. However, existing models do not provide an adequate support for performance portability of HPC applications. This paper proposes a low level component model (L²C) that supports inter-component interactions for typical scenarios of high performance computing, such as process-local shared memory and function invocation (C++ and Fortran), MPI, and Corba. To study the benefits of using L²C, this paper walks through an example of stencil computation, i.e. a structured mesh Jacobi implementation of the 2D heat equation parallelized through domain decomposition. The experimental results obtained on the Grid'5000 testbed and on the Curie supercomputer show that L²C can achieve performance similar to that of native implementations, while easing performance portability.
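
For reference, the stencil at the heart of the case study is the standard Jacobi sweep for the 2D heat equation; a minimal serial numpy sketch is below (the L²C version instead splits the mesh into subdomains whose components exchange ghost rows via shared memory or MPI, which this sketch does not show):

```python
import numpy as np

def jacobi_step(u):
    """One Jacobi sweep on a structured mesh: each interior point becomes the
    average of its four neighbours (the stencil parallelized in the paper)."""
    new = u.copy()
    new[1:-1, 1:-1] = 0.25 * (u[:-2, 1:-1] + u[2:, 1:-1] +
                              u[1:-1, :-2] + u[1:-1, 2:])
    return new

u = np.zeros((64, 64))
u[0, :] = 100.0                  # fixed hot boundary condition
for _ in range(500):
    u = jacobi_step(u)
```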

18.
In recent years, many institutions have provided themselves with cluster and Grid infrastructures for intensive computation or research purposes. Since each infrastructure has its own management software, integrating the different platforms becomes a hard and complicated task. Our objective was to solve the interoperability problem for a set of different computing infrastructures belonging to our institution in order to tackle a computation-intensive problem. The paper describes the solution that was applied and the experimental results obtained. The solution uses a platform based on a central bus shared by the involved system components for information exchange. To ensure that all computations succeed, the solution includes cloud infrastructures to deal with situations in which the local computing resources pose problems. A cloud-based deployment of the bus is also explored and empirically compared with a local deployment.
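
The paper's bus technology is not named in the abstract; purely to illustrate the shared-bus integration pattern, the sketch below shows a minimal in-process publish/subscribe bus through which heterogeneous components exchange information without knowing about each other (a toy stand-in, not the deployed system):

```python
from collections import defaultdict

class Bus:
    """Minimal publish/subscribe bus: components register handlers for topics
    and exchange messages without direct dependencies on one another."""
    def __init__(self):
        self.subscribers = defaultdict(list)

    def subscribe(self, topic, handler):
        self.subscribers[topic].append(handler)

    def publish(self, topic, message):
        for handler in self.subscribers[topic]:
            handler(message)

bus = Bus()
bus.subscribe("job.finished", lambda m: print("collector got:", m))
bus.publish("job.finished", {"id": 42, "site": "cluster-A"})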

19.
The dynamic provisioning of virtualized resources offered by cloud computing infrastructures allows applications deployed in a cloud environment to automatically increase and decrease the amount of resources they use. This capability is called auto-scaling, and its main purpose is to automatically adjust the scale of the system running the application to satisfy the varying workload with minimum resource utilization. The need for auto-scaling is particularly important during workload peaks, in which applications may need to scale up to extremely large-scale systems. Both the research community and the main cloud providers have already developed auto-scaling solutions. However, most research solutions are centralized and not suitable for managing large-scale systems; moreover, cloud providers' solutions are bound to the limitations of a specific provider in terms of resource prices, availability, reliability, and connectivity. In this paper we propose DEPAS, a decentralized probabilistic auto-scaling algorithm integrated into a P2P architecture that is cloud provider independent, thus allowing the auto-scaling of services over multiple cloud infrastructures at the same time. Our experiments (simulations and real deployments), which are based on real service traces, show that our approach is capable of: (i) keeping the overall utilization of all the instantiated cloud resources in a target range, and (ii) maintaining service response times close to those obtained using optimal centralized auto-scaling approaches.
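
DEPAS's actual probability function is not given in the abstract; a rule in its spirit is sketched below: each node independently compares locally observed utilization with a target range and acts with a probability proportional to the deviation, so that in expectation the right number of nodes scales out or in without any coordinator (thresholds and the linear probability are assumptions):

```python
import random

def scaling_decision(utilization, low=0.4, high=0.7, rng=random.random):
    """Decentralized probabilistic auto-scaling rule (sketch, not the paper's
    formula): the further utilization is outside the target range [low, high],
    the more likely this node is to add or remove capacity."""
    if utilization > high:
        p = (utilization - high) / (1.0 - high)
        return "scale_out" if rng() < p else "hold"
    if utilization < low:
        p = (low - utilization) / low
        return "scale_in" if rng() < p else "hold"
    return "hold"

print(scaling_decision(0.92))   # likely "scale_out" under heavy load
```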

20.
Grid computing is a computing paradigm that integrates scattered resources to achieve resource sharing and collaborative work. Cloud computing is an evolution of grid computing, parallel computing, and distributed computing; it is an emerging commercial computing model with new characteristics that differ from grid computing. Building on a study of the concepts of grid computing and cloud computing, this paper analyzes and compares the two from multiple perspectives, including architecture, focus, resource management, and job scheduling. The commercial philosophy adopted by cloud computing, its mature resource virtualization technology, and its non-standardized specifications give its architecture, resource management, and job scheduling distinct characteristics and make it better suited to the goal of providing users with on-demand services, although its security still needs continuous improvement.
