Similar Articles: 20 results found
1.
Traditional web-based e-learning systems suffer from unstable workloads and the security risks of incorporating external executable objects into servers. This paper addresses these issues with emerging technologies, namely desktop grids and cloud computing. Learning users are motivated to become volunteers by hosting virtual machines equipped with e-learning desktop grid applications. We develop components that integrate the e-learning system and the desktop grid into an environment in which each user serves not only as a task producer but also as a volunteer that solves tasks. To enhance responsiveness between the passive desktop grid server and the e-learning system, we have also developed asynchronous processes that enable the server and volunteer workers to cooperate in a tightly coupled manner. The system achieves scalability by keeping the ratio of volunteers to online users above a certain threshold.
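A minimal sketch of the scalability policy described in this abstract: recruit enough volunteers to keep the volunteer-to-online-user ratio above a threshold. The function, threshold value, and numbers are illustrative assumptions, not the paper's actual mechanism.

```python
import math

RATIO_THRESHOLD = 0.5  # assumed: at least one volunteer per two online users

def volunteers_needed(num_volunteers: int, num_online_users: int) -> int:
    """Extra volunteers to recruit so the volunteer-to-user ratio
    stays at or above RATIO_THRESHOLD."""
    required = math.ceil(num_online_users * RATIO_THRESHOLD)
    return max(0, required - num_volunteers)

# 120 online users but only 50 volunteers: recruit 10 more.
print(volunteers_needed(50, 120))  # -> 10
```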

2.
The paper describes the results of the EU FP7 EDGI project concerning how to extend gLite VOs with public and institutional BOINC desktop grids. Beyond simply presenting the integration architecture's components and services, the main emphasis is on how this integrated architecture can efficiently support parameter-study applications, based on the metajob concept created by the EDGI project. The paper explains in detail how gLite users can use the metajob concept to exploit the BOINC desktop grids connected to the gLite VO, and how metajobs are managed internally by the 3G Bridge service. Performance measurements show that the metajob concept can indeed significantly improve the performance of gLite VOs extended with desktop grids. Finally, the paper describes practical ways of connecting BOINC desktop grids to gLite VOs and the accounting mechanism in these integrated grid systems.
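A hedged sketch of the metajob idea: a single submission that the bridge fans out into many parameter-study jobs, one per parameter value. The dictionary layout below is invented for illustration and is not the 3G Bridge metajob syntax.

```python
# One metajob describing a 1000-run parameter study (format assumed).
metajob = {"executable": "simulate", "input": "model.dat",
           "parameter": "seed", "values": range(1, 1001)}

def expand(mj):
    # Fan the metajob out into ordinary jobs, one per parameter value,
    # the way a bridge service would distribute them to grid workers.
    for v in mj["values"]:
        yield {"executable": mj["executable"], "input": mj["input"],
               "args": [f"--{mj['parameter']}={v}"]}

jobs = list(expand(metajob))
print(len(jobs), jobs[0])  # 1000 jobs fanned out from one submission
```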

3.
Volunteer computing systems offer high computing power to scientific communities for running large, data-intensive scientific workflows. However, these computing environments provide only a best-effort infrastructure for executing high-performance jobs. This work schedules scientific, data-intensive workflows on a hybrid of volunteer computing systems and cloud resources to improve the utilization of these environments and increase the percentage of workflows that meet their deadline. The proposed workflow scheduling system partitions a workflow into sub-workflows so as to minimize data dependencies among them. These sub-workflows are then distributed across volunteer resources according to resource proximity and a load-balancing policy, and the execution time of each sub-workflow on the selected volunteer resources is estimated in this phase. If a sub-workflow would miss its sub-deadline because of a long waiting time, it is re-scheduled onto public cloud resources. This re-scheduling improves system performance by increasing the percentage of workflows that meet the deadline: the proposed cloud-aware, data-intensive scheduling algorithm increases that percentage by 75% on average compared with executing the workflows on volunteer resources alone.
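A hedged sketch of the re-scheduling rule described above: a sub-workflow stays on volunteer resources only while its estimated finish time meets its sub-deadline; otherwise it moves to public cloud resources. The data structure and estimates are illustrative assumptions, not the paper's implementation.

```python
from dataclasses import dataclass

@dataclass
class SubWorkflow:
    name: str
    sub_deadline: float          # seconds from workflow start
    est_volunteer_finish: float  # estimated finish time on volunteers
    est_cloud_finish: float      # estimated finish time on the cloud

def place(sub: SubWorkflow) -> str:
    # Prefer free volunteer resources; fall back to the cloud when
    # waiting time would make the sub-workflow miss its sub-deadline.
    if sub.est_volunteer_finish <= sub.sub_deadline:
        return "volunteer"
    return "cloud"

jobs = [SubWorkflow("align", 300, 250, 120),
        SubWorkflow("reduce", 600, 900, 400)]
print({j.name: place(j) for j in jobs})
# -> {'align': 'volunteer', 'reduce': 'cloud'}
```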

4.
From cloud computing to cloud manufacturing
Cloud computing is changing the way industries and enterprises do business: dynamically scalable, virtualized resources are provided as a service over the Internet. This model creates a brand-new opportunity for enterprises. In this paper, some of the essential features of cloud computing are briefly discussed with regard to end-users, enterprises that use the cloud as a platform, and cloud providers themselves. Cloud computing is emerging as one of the major enablers for the manufacturing industry; it can transform the traditional manufacturing business model, help align product innovation with business strategy, and create intelligent factory networks that encourage effective collaboration. Two types of cloud computing adoption in the manufacturing sector are suggested: manufacturing with direct adoption of cloud computing technologies, and cloud manufacturing, the manufacturing version of cloud computing. Cloud computing has already been adopted in some key areas of manufacturing, such as IT, pay-as-you-go business models, scaling production up and down with demand, and flexibility in deploying and customizing solutions. In cloud manufacturing, distributed resources are encapsulated into cloud services and managed in a centralized way. Clients can use cloud services according to their requirements, requesting services that range from product design, manufacturing, testing, and management to all other stages of a product's life cycle.

5.
The smart grid meets current needs and is of great significance. This paper first briefly introduces cloud computing and the smart grid and describes the applications of cloud computing in the smart grid, and then focuses on an analysis of cloud computing security technologies.

6.
With the rapid development of cloud computing, more and more users are hosting their applications and data in the cloud. However, because of new characteristics of cloud computing such as virtualization, multi-tenancy, and elasticity, traditional security techniques cannot guarantee its security, and security problems have become one of the main obstacles to the development of cloud computing. Cloud security has therefore become a key focus of current cloud computing research.

7.
Many modern computing platforms, notably clouds and desktop grids, exhibit dynamic heterogeneity: the availability and computing power of their constituent resources can change unexpectedly and dynamically, even in the midst of a computation. We introduce a new quality metric, area, for schedules that execute computations with interdependent constituent chores (jobs, tasks, etc.) on such platforms. Area measures the average number of tasks that a schedule renders eligible for execution at each step of a computation. Even though the definition of area does not mention any properties of the host platform (such as volatility), intuition suggests that rendering tasks eligible at a faster rate will benefit performance on volatile platforms, and we report on simulation experiments that support this intuition. We derive the basic properties of the area metric and show how to efficiently craft area-maximizing (A-M) schedules for several classes of significant computations. Simulations comparing A-M scheduling against heuristics ranging from lightweight ones (e.g., FIFO) to computationally intensive ones suggest that A-M schedules complete computations on volatile heterogeneous platforms faster than their competition, by percentages that vary with computation structure and platform behavior but are often in the double digits.
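A toy illustration of the area metric: for a fixed execution order over a task DAG, count the tasks that become eligible (all predecessors finished) at each step and average them. The DAG encoding is an assumption made for this sketch.

```python
def area(order, preds):
    """order: task ids in execution order;
    preds: dict mapping each task to its set of predecessors."""
    done, total = set(), 0
    for task in order:
        done.add(task)
        eligible = [t for t in preds
                    if t not in done and preds[t] <= done]
        total += len(eligible)
    return total / len(order)

# Diamond DAG: a precedes b and c, which both precede d.
preds = {"a": set(), "b": {"a"}, "c": {"a"}, "d": {"b", "c"}}
print(area(["a", "b", "c", "d"], preds))  # -> 1.0
```

An area-maximizing scheduler prefers orders that keep this average high, so volatile workers are less likely to sit idle waiting for eligible tasks.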

8.
The volunteer computing paradigm, along with the tailored use of peer-to-peer communication, has recently proven capable of solving a wide range of data-intensive problems in a distributed scenario. The Mining@Home framework is based on these paradigms and has been implemented to run a wide range of distributed data mining applications. The efficiency and scalability of the architecture can be fully exploited when the overall task can be partitioned into distinct jobs that may be executed in parallel and the input data can be reused, which naturally leads to the use of data cachers. This paper explores the opportunities Mining@Home offers for the discovery of classifiers through the bagging approach: multiple learners compute models from the same input data, and a final model with high statistical accuracy is extracted from them. The analysis focuses on experiments performed in a real distributed environment, enriched with simulation (to evaluate very large environments) and with an analytical investigation based on the iso-efficiency methodology. An extensive set of experiments allowed us to analyze a number of heterogeneous scenarios with different problem sizes, which helps improve performance by appropriately tuning the number of workers and the number of interconnected domains.
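A minimal sketch of the bagging scheme the framework distributes: each volunteer trains a learner on a bootstrap resample of the same input data, and the final model is their majority vote. The toy learner, a one-feature threshold rule, is an assumption for illustration.

```python
import random
import statistics

data = [(x / 10, 1 if x > 50 else 0) for x in range(100)]  # (feature, label)

def train(sample):
    # Toy learner: threshold halfway between the two class means.
    pos = [x for x, y in sample if y == 1]
    neg = [x for x, y in sample if y == 0]
    return (sum(pos) / len(pos) + sum(neg) / len(neg)) / 2

# 25 bootstrap resamples -> 25 models, as if each came from one worker.
models = [train(random.choices(data, k=len(data))) for _ in range(25)]

def bagged_predict(x):
    return statistics.mode(1 if x > t else 0 for t in models)

print(bagged_predict(7.3))  # -> 1 (decision boundary is near 5.0)
```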

9.
10.
The Quake-Catcher Network (QCN) represents a paradigm shift in seismic networks by involving the general public in the collection, detection, and recognition of seismic events. The QCN uses low-cost sensors connected to volunteer computers across the world to monitor seismic events. The location and density of these sensors can affect the accuracy of event detection, and testing different arrangements of new sensors could disrupt the currently active project; such an experiment is therefore best performed in a simulated environment. This paper presents an accurate and efficient framework for simulating low-cost QCN sensors and identifying their most effective locations and densities. To build the framework, we extend EmBOINC, an emulator of Berkeley Open Infrastructure for Network Computing (BOINC) projects, to handle the trickle messages that sensors connected to volunteer hosts generate and send to the QCN server when strong ground motion is detected. EmBOINC allows us to rigorously study QCN simulations with 100,000 or even 1,000,000 sensors, highlight the strengths and weaknesses of different sensor densities and placements, and test the network with various parameters, conditions, and earthquake scenarios. Results obtained with EmBOINC and presented in this paper show how our simulations can reliably study diverse sensor densities and seismic scenarios under different geographical and infrastructural constraints.
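A toy version of the kind of question such a simulation framework answers: for a given number of volunteer sensors scattered over a region, how often is an event "detected" by at least k sensors close enough to feel it? All sizes and thresholds below are invented.

```python
import random

def detection_rate(n_sensors, trials=2000, region=100.0, radius=20.0, k=3):
    hits = 0
    for _ in range(trials):
        sensors = [(random.uniform(0, region), random.uniform(0, region))
                   for _ in range(n_sensors)]
        ex, ey = random.uniform(0, region), random.uniform(0, region)
        felt = sum((sx - ex) ** 2 + (sy - ey) ** 2 <= radius ** 2
                   for sx, sy in sensors)
        hits += felt >= k   # enough sensors would send trickle messages
    return hits / trials

for n in (25, 50, 100):  # detection improves with sensor density
    print(n, detection_rate(n))
```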

11.
Drawing on OCLC's design ideas for cloud computing libraries and its methods for integrating library resources and daily services on the basis of cloud computing, this paper proposes an approach to building cloud services for cloud computing libraries, together with a corresponding model.

12.
Traditionally, heavy computational tasks were performed on dedicated infrastructure requiring a heavy initial investment, such as a supercomputer or a data center. Grid computing relaxed the assumption of a fixed infrastructure, allowing the sharing of remote computational resources. Cloud computing brought these ideas into the commercial realm and allows users to request an essentially unlimited amount of computing power on demand. In contrast to previous assumptions, however, this computing power is metered and billed on an hour-by-hour basis. In this paper, we consider applications whose output quality increases with the deployed computational power, a large class ranging from weather prediction to financial modeling. We propose a computation-scheduling approach that considers both the financial cost of the computation and the predicted financial benefit of its output, that is, its value of information (VoI). We model the proposed approach on the example of analyzing real-estate investment opportunities in a competitive environment, and show that the VoI-based scheduling algorithm outperforms minimalistic computing approaches, large but fixedly allocated data centers, and cloud computing approaches that do not consider the VoI.
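A minimal sketch of the VoI-based decision: keep renting metered compute only while the predicted marginal benefit of the improved output exceeds the hourly price. The value-of-information predictor is an assumed, application-specific input.

```python
HOURLY_RATE = 3.0  # assumed price of one instance-hour

def hours_to_rent(value_of_information, max_hours=48):
    """Rent one more hour while its marginal VoI exceeds its cost."""
    hours = 0
    while hours < max_hours:
        marginal = value_of_information(hours + 1) - value_of_information(hours)
        if marginal <= HOURLY_RATE:
            break
        hours += 1
    return hours

# Diminishing returns: each extra hour is worth half the previous one.
print(hours_to_rent(lambda h: 100 * (1 - 0.5 ** h)))  # -> 5
```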

13.
Scheduling is essentially a decision-making process that enables resource sharing among a number of activities by determining their execution order on the set of available resources. The emergence of distributed systems brought new scheduling challenges in computer systems, including clusters, grids, and, more recently, clouds. At the same time, the sheer volume of research makes it hard for newcomers to the field to understand the relationships among the different scheduling problems and strategies proposed in the literature, which hampers the identification of new and relevant research avenues. In this paper we introduce a classification of the scheduling problem in distributed systems by presenting a taxonomy that incorporates recent developments, especially those in cloud computing. We review the scheduling literature to corroborate the taxonomy and analyze the interest in its different branches. Finally, we identify relevant future directions in scheduling for distributed systems.

14.
Cloud computing can reduce power consumption by using virtualized computational resources to provision an application's computational resources on demand. Auto-scaling is an important cloud computing technique that dynamically allocates computational resources to applications to match their current loads precisely, thereby removing resources that would otherwise sit idle and waste power. This paper presents a model-driven engineering approach to optimizing the configuration, energy consumption, and operating cost of cloud auto-scaling infrastructure, creating greener computing environments that reduce the emissions caused by superfluous idle resources. The paper provides four contributions to the study of model-driven configuration of cloud auto-scaling infrastructure: (1) it explains how virtual machine configurations can be captured in feature models; (2) it describes how these models can be transformed into constraint satisfaction problems (CSPs) for configuration and energy consumption optimization; (3) it shows how optimal auto-scaling configurations can be derived from these CSPs with a constraint solver; and (4) it presents a case study showing the energy consumption and cost reduction produced by this model-driven approach.
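A hedged sketch of contributions (2) and (3): encode a tiny feature model's constraints and pick the feasible virtual machine configuration with the lowest modeled energy draw. A real system would hand the CSP to a constraint solver; brute-force enumeration stands in for it here, and the constraints and power model are invented.

```python
from itertools import product

CPUS = [1, 2, 4, 8]   # vCPUs offered by the (assumed) feature model
RAM = [2, 4, 8, 16]   # GB

def feasible(cpu, ram):
    # Feature-model-style constraints: RAM scales with vCPUs, and the
    # configuration must cover the expected load (cpu * ram >= 32).
    return ram >= 2 * cpu and cpu * ram >= 32

def energy(cpu, ram):
    return 10 + 4 * cpu + 0.5 * ram  # toy linear power model (watts)

configs = [(c, r) for c, r in product(CPUS, RAM) if feasible(c, r)]
best = min(configs, key=lambda cr: energy(*cr))
print(best, energy(*best))  # -> (2, 16) 26.0
```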

15.
There is growing interest in the utilisation of cloud computing in education. As organisations involved in the area typically face severe budget restrictions, cost optimisation mechanisms are needed that exploit the unique features of digital learning environments. In this work, we introduce a method based on Maximum Likelihood Estimation that takes the heterogeneity of IT infrastructure into account in order to devise resource allocation plans that maximise platform utilisation in educational environments. We performed experiments using datasets modelled on real digital teaching solutions and obtained cost reductions of up to 30% compared with conservative resource allocation strategies.
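A minimal sketch of the estimation step, under the assumption that per-lesson demand is Poisson: the maximum-likelihood estimate of the rate is the sample mean, and capacity is provisioned to cover most lessons under the fitted model. All numbers are invented.

```python
import math

observed_users = [42, 37, 51, 45, 39, 48, 44]    # per-lesson peak users
lam = sum(observed_users) / len(observed_users)  # Poisson MLE: sample mean

def poisson_cdf(k, lam):
    return sum(math.exp(-lam) * lam ** i / math.factorial(i)
               for i in range(k + 1))

# Smallest capacity covering 95% of lessons under the fitted model.
capacity = 0
while poisson_cdf(capacity, lam) < 0.95:
    capacity += 1
print(f"lambda={lam:.1f}, provision for {capacity} concurrent users")
```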

16.
This paper proposes a network security processing model for cloud environments in which every cloud server has its own intrusion detection system and all servers share an anomaly management platform responsible for receiving and processing alerts and for log management. The model uses dynamic adjustment of alert levels and sharing of attack information to minimize both the false-negative rate and the likelihood of servers suffering the same kind of attack, effectively improving detection efficiency and the overall security of the system.
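A hedged sketch of the alert-sharing idea: each server's IDS reports alerts to the shared platform, which raises the alert level for a signature as independent reports accumulate, so later-targeted servers react sooner. All names and thresholds are invented.

```python
from collections import defaultdict

class AnomalyPlatform:
    """Shared platform: receives alerts, adjusts levels, keeps a log."""
    def __init__(self):
        self.reporters = defaultdict(set)  # signature -> reporting servers
        self.log = []

    def report(self, server: str, signature: str) -> str:
        self.reporters[signature].add(server)
        n = len(self.reporters[signature])
        level = "low" if n == 1 else "medium" if n == 2 else "high"
        self.log.append((server, signature, level))
        return level

platform = AnomalyPlatform()
for srv in ("web-1", "web-2", "db-1"):
    print(srv, platform.report(srv, "ssh-bruteforce"))
# web-1 low, web-2 medium, db-1 high
```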

17.
Cloud computing has risen rapidly in recent years and has quickly been applied to library systems, but it also brings security problems. This paper describes the security problems faced by digital libraries in the cloud era and studies in detail how these security problems can be solved.

18.
Deep learning has attracted a lot of attention and has been applied successfully in many areas, such as bioinformatics, image processing, game playing, and computer security. On the other hand, deep learning usually requires a large amount of training data, which may not be held by a single owner. As data volumes grow, it is common for users to store their data in a third-party cloud, and because the data are confidential, they are usually stored in encrypted form. To apply deep learning to such datasets, owned by multiple data owners and stored on a cloud, two challenges must be tackled: (i) the data are encrypted with different keys, so all operations, including intermediate results, must be secure; and (ii) the computational and communication costs borne by the data owners should be kept minimal. In this work, we propose two schemes that solve these problems. We first present a basic scheme based on multi-key fully homomorphic encryption (MK-FHE), and then propose an advanced scheme based on a hybrid structure that combines a double decryption mechanism with fully homomorphic encryption (FHE). We also prove that these two multi-key privacy-preserving deep learning schemes over encrypted data are secure.
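To make the computation-on-ciphertexts idea concrete, here is a toy additively homomorphic (Paillier) example: the cloud multiplies two ciphertexts and thereby adds the plaintexts without ever seeing them. Real MK-FHE supports multiple keys and arbitrary circuits, which this single-key toy does not; the tiny fixed primes make it completely insecure.

```python
import math
import random

p, q = 61, 53                     # insecure toy primes
n, n2 = p * q, (p * q) ** 2
lam = math.lcm(p - 1, q - 1)
mu = pow(lam, -1, n)              # valid because we use g = n + 1

def encrypt(m):
    r = random.randrange(1, n)
    while math.gcd(r, n) != 1:
        r = random.randrange(1, n)
    return (pow(n + 1, m, n2) * pow(r, n, n2)) % n2

def decrypt(c):
    return (pow(c, lam, n2) - 1) // n * mu % n

# The cloud adds encrypted values by multiplying ciphertexts:
c = (encrypt(20) * encrypt(22)) % n2
print(decrypt(c))  # -> 42
```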

19.
Interest is growing in open source tools that let organizations build IaaS clouds on their own internal infrastructure, alone or in conjunction with external infrastructure. A key component of such private/hybrid clouds is virtual infrastructure management, i.e., the dynamic orchestration of virtual machines based on an understanding and prediction of performance at scale, under uncertain workloads and frequent node failures. Part of the research community is trying to solve this and other IaaS problems with Autonomic Computing techniques, which can provide, for example, better management of energy consumption, quality of service (QoS), and unpredictable system behavior. In this context, we first recall the main features of the NAM framework, which is devoted to the design of distributed autonomic systems. We then illustrate the organization and policies of a NAM-based Workload Manager, focusing on one of its components, the Capacity Planner. We show that, when optimal energy-aware plans cannot be obtained analytically, sub-optimal plans can be obtained autonomically using online discrete event simulation. Specifically, the proposed approach can cope with a broader range of working conditions and types of workloads.
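A hedged sketch of the capacity-planning loop: when no analytic optimum exists, candidate plans are evaluated with a quick online simulation and the cheapest one that still meets the QoS target is kept. The workload, power, and response-time models below are invented stand-ins.

```python
import random

def simulated_p95(servers, load=120, trials=500):
    # Toy simulation: spread exponentially distributed work over the
    # servers and take the 95th-percentile completion time.
    samples = sorted(sum(random.expovariate(1.0) for _ in range(load)) / servers
                     for _ in range(trials))
    return samples[int(0.95 * trials)]

def plan(max_servers=25, qos_limit=8.0, watts_per_server=150):
    for c in range(1, max_servers + 1):   # cheapest (fewest servers) first
        if simulated_p95(c) <= qos_limit:
            return c, c * watts_per_server
    return max_servers, max_servers * watts_per_server

servers, watts = plan()
print(servers, watts)  # exact values vary with the random draws
```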

20.
As informatization is applied ever more deeply in enterprises, the drawbacks of traditional IT architectures are gradually becoming apparent: utilization of basic resources is low, management is too fragmented, and legacy business systems cannot respond quickly when management or business requirements change. How to coordinate business with IT construction, and how to make use of the data already held in IT systems, are problems enterprises must solve now. Cloud computing, currently one of the hottest technologies in the information field, can both integrate existing IT resources effectively and let IT respond rapidly to business changes, so its development offers hope for solving the enterprise problems described above.
