Similar Documents
20 similar documents found.
1.
MapReduce is regarded as an adequate programming model for large-scale data-intensive applications. The Hadoop framework is a well-known MapReduce implementation that runs MapReduce tasks on a cluster system. G-Hadoop extends the Hadoop MapReduce framework to allow MapReduce tasks to run on multiple clusters. However, G-Hadoop simply reuses the user authentication and job submission mechanism of Hadoop, which was designed for a single cluster. This work proposes a new security model for G-Hadoop. The model builds on established security solutions such as public key cryptography and the SSL protocol, and is designed specifically for distributed environments. It simplifies the user authentication and job submission process of the current G-Hadoop implementation with a single-sign-on approach. In addition, the designed security framework provides a number of different security mechanisms to protect the G-Hadoop system from traditional attacks.
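The abstract does not detail the single-sign-on protocol; the sketch below illustrates one plausible building block, assuming an RSA-signed login token that each participating cluster can verify with the master node's public key. All names here are hypothetical and none of this mirrors the actual G-Hadoop code.

```python
# Minimal single-sign-on token sketch, assuming RSA + SHA-256 signatures.
# Uses the `cryptography` package; illustrative only, not G-Hadoop's API.
import json, time
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding

master_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)

def issue_token(user: str, ttl: int = 3600) -> tuple[bytes, bytes]:
    """Master node signs a short-lived token after authenticating the user once."""
    payload = json.dumps({"user": user, "exp": time.time() + ttl}).encode()
    signature = master_key.sign(
        payload,
        padding.PSS(mgf=padding.MGF1(hashes.SHA256()),
                    salt_length=padding.PSS.MAX_LENGTH),
        hashes.SHA256(),
    )
    return payload, signature

def verify_token(payload: bytes, signature: bytes) -> bool:
    """Any participating cluster verifies the token with the master's public key."""
    try:
        master_key.public_key().verify(
            signature,
            payload,
            padding.PSS(mgf=padding.MGF1(hashes.SHA256()),
                        salt_length=padding.PSS.MAX_LENGTH),
            hashes.SHA256(),
        )
        return json.loads(payload)["exp"] > time.time()
    except Exception:
        return False

payload, sig = issue_token("alice")
assert verify_token(payload, sig)   # clusters accept without re-authenticating
```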

2.
Multimedia communication research and development often requires computationally intensive simulations to develop and investigate the performance of new optimization algorithms. Depending on the complexity of the algorithms, testing an adequate set of conditions may take several days. The traditional approach to speeding up this type of relatively small simulation, which requires several develop–simulate–reconfigure cycles, is to run it in parallel on a few computers, leaving them idle while the technique for the next simulation cycle is developed. This work proposes a new cost-effective framework based on cloud computing for accelerating the development process, in which resources are obtained on demand and paid for only when actually used. Issues are addressed both analytically and practically by running actual test cases, i.e., simulations of video communications over a packet-lossy network, using a commercial cloud computing service. A software framework has also been developed to simplify the management of the virtual machines in the cloud. Results show that using the considered cloud computing service is economically convenient, especially in terms of reduced development time and costs, compared with a solution using dedicated computers, when the development time is longer than one hour. If more development time is needed between simulations, the economic advantage progressively shrinks as the computational complexity of the simulation increases.
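As a rough illustration of the break-even reasoning (with invented prices and durations, not figures from the paper): dedicated machines cost money whether or not they are busy, while cloud instances are billed only while simulations run.

```python
# Toy break-even comparison between dedicated machines and on-demand cloud
# instances. All prices and durations are made up for illustration only.
def dedicated_cost(n_machines, hourly_amortized, total_hours):
    # Dedicated hardware is paid for over the whole period, idle or not.
    return n_machines * hourly_amortized * total_hours

def cloud_cost(n_instances, hourly_rate, busy_hours):
    # Cloud instances are paid only while simulations actually run.
    return n_instances * hourly_rate * busy_hours

cycles = 10                  # develop-simulate-reconfigure cycles
sim_hours, dev_hours = 2, 3  # per cycle: simulation time vs. idle development time
total = cycles * (sim_hours + dev_hours)

print("dedicated:", dedicated_cost(8, 0.10, total))          # pays for idle time too
print("cloud:    ", cloud_cost(8, 0.25, cycles * sim_hours)) # pays only while busy
```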

3.
From microarrays and next-generation sequencing to clinical records, the amount of biomedical data is growing at an exponential rate. Handling and analyzing these large amounts of data demands that computing power and methodologies keep pace. The goal of this paper is to illustrate how high-performance computing methods in SAS can be implemented easily, without extensive computer programming knowledge or access to supercomputing clusters, to help address the challenges posed by large biomedical datasets. We illustrate the utility of database connectivity, pipeline parallelism, multi-core parallel processing and distributed processing across multiple machines. Simulation results are presented for parallel and distributed processing. Finally, the costs and benefits of such methods are discussed in comparison with traditional HPC supercomputing clusters.
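The paper's examples are in SAS; as a language-neutral illustration of the multi-core pattern it describes, the Python sketch below fans a simulation function out over all available cores. The workload shown is invented.

```python
# Multi-core parallel processing sketch (Python stand-in for the SAS examples).
from multiprocessing import Pool
import statistics, random

def simulate(seed: int) -> float:
    """One independent replicate, e.g. a bootstrap statistic (toy workload)."""
    rng = random.Random(seed)
    sample = [rng.gauss(0, 1) for _ in range(10_000)]
    return statistics.mean(sample)

if __name__ == "__main__":
    with Pool() as pool:                    # one worker per core by default
        results = pool.map(simulate, range(100))
    print(statistics.mean(results), statistics.stdev(results))
```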

4.
Context: Cloud computing is a thriving paradigm that supports an efficient way to provide IT services by introducing on-demand services and flexible computing resources. However, significant adoption of cloud services is being hindered by security issues that are inherent to this new paradigm. In previous work, we proposed ISGcloud, a security governance framework that tackles cloud security matters in a comprehensive manner while remaining aligned with an enterprise's strategy. Objective: Although a significant body of literature has started to build up on the security aspects of cloud computing, it fails to report on evidence and real applications of security governance frameworks designed for cloud computing environments. This paper introduces a detailed application of ISGcloud to a real-life case study of a Spanish public organisation, which utilises a cloud storage service in a security-critical deployment. Method: The empirical evaluation followed a formal process, which included the definition of research questions prior to the framework's application. We describe the ISGcloud process and attempt to answer these questions, gathering results through direct observation and from interviews with related personnel. Results: The novelty of the paper is twofold: on the one hand, it presents one of the first applications in the literature of a cloud security governance framework to a real-life case study, along with an empirical evaluation that proves the framework's validity; on the other hand, it demonstrates the usefulness of the framework and its impact on the organisation. Conclusion: As discussed in the paper, the application of ISGcloud has enabled the organisation in question to achieve its security governance objectives, minimising the security risks of its storage service and increasing security awareness among its users.

5.
Due to the energy crises of recent years, energy waste and sustainability have been brought into public attention and placed under industry and scientific scrutiny. Obtaining high performance at a reduced cost in cloud environments has thus reached a turning point where computing power is no longer the most important concern. Instead, the emphasis is shifting to managing energy efficiently, and providing techniques for measuring energy requirements in cloud systems becomes of capital importance. Currently there are different methods for measuring energy consumption in computer systems. The first consists in using power meter devices, which measure the aggregated power use of a machine. Another method involves directly instrumenting the motherboard with multimeters in order to obtain each power connector's voltage and current, thus obtaining real-time power consumption. These techniques provide very accurate results, but they are not suitable for large-scale environments. By contrast, simulation techniques scale well for experiments on energy consumption in cloud environments. In this paper we propose E-mc2, a formal framework integrated into the iCanCloud simulation platform for modelling the energy requirements of cloud computing systems.
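The abstract does not give E-mc2's equations; a common simple model of the kind such simulators implement (an assumption here, not the paper's actual formulation) interpolates power linearly between idle and peak draw as a function of utilisation.

```python
# Linear server power model, a standard simplification (not E-mc2's actual model):
# P(u) = P_idle + (P_peak - P_idle) * u, with utilisation u in [0, 1].
def power_watts(u: float, p_idle: float = 100.0, p_peak: float = 250.0) -> float:
    assert 0.0 <= u <= 1.0
    return p_idle + (p_peak - p_idle) * u

def energy_joules(utilisation_trace, dt_seconds: float = 1.0) -> float:
    """Integrate power over a per-interval utilisation trace."""
    return sum(power_watts(u) * dt_seconds for u in utilisation_trace)

trace = [0.2, 0.9, 0.9, 0.5, 0.0]            # toy 5-second utilisation trace
print(energy_joules(trace), "J")
```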

6.
In cloud computing environments at the software-as-a-service (SaaS) level, interoperability refers to the ability of SaaS systems on one cloud provider to communicate with SaaS systems on another. Interoperability is one of the most important barriers to the adoption of SaaS systems, and a common tactic for enabling it is the use of an interoperability framework or model. During the past few years, various such frameworks and models have been developed at the cloud SaaS level. The syntactic interoperability of SaaS systems has already been intensively researched; however, not enough consideration has been given to semantic interoperability, which remains a challenge in the world of SaaS in cloud computing environments. Therefore, a semantic interoperability framework for SaaS systems in cloud computing environments is needed, and we develop one in this paper. The capabilities and value of service-oriented architecture for semantic interoperability within cloud SaaS systems are studied and demonstrated. The work proceeds in a number of steps (the research methodology): it begins with a study of related work in the literature; then the problem statement and research objectives are explained; next, the semantic interoperability requirements that SaaS systems in cloud computing environments need to support are analysed; the details and design of the proposed framework are then presented; finally, the evaluation methods are elaborated. To evaluate the effectiveness of the proposed framework, extensive experimentation and statistical analysis have been performed. The experiments and statistical analysis indicate that the proposed framework establishes semantic interoperability between cloud SaaS systems in a more efficient way, and that it yields a significant improvement in the effectiveness of semantic interoperability of SaaS systems in cloud computing environments.

7.
Secure provenance, which records the ownership and process history of data objects, is vital to the success of data forensics in cloud computing. In this paper, we propose a new secure provenance scheme based on group signature and attribute-based signature techniques. The proposed scheme provides confidentiality for sensitive documents stored in a cloud, unforgeability of the provenance record, anonymous authentication to cloud servers, fine-grained access control on documents, and provenance tracking of disputed documents. Furthermore, it is assumed that the cloud server has huge computation capacity, while users are regarded as devices with low computation capability. Exploiting this asymmetry, we show how to outsource computation to the cloud server and decrease the user's computational overhead during the provenance process. Using provable security techniques, we formally demonstrate the security of the proposed scheme under standard assumptions.

8.
Interest is growing in open source tools that let organizations build IaaS clouds using their own internal infrastructures, alone or in conjunction with external ones. A key component in such private/hybrid clouds is virtual infrastructure management, i.e., the dynamic orchestration of virtual machines, based on the understanding and prediction of performance at scale, under uncertain workloads and frequent node failures. Part of the research community is trying to solve this and other IaaS problems by looking to Autonomic Computing techniques, which can provide, for example, better management of energy consumption, quality of service (QoS), and unpredictable system behaviors. In this context, we first recall the main features of the NAM framework, devoted to the design of distributed autonomic systems. Then we illustrate the organization and policies of a NAM-based Workload Manager, focusing on one of its components, the Capacity Planner. We show that, when optimal energy-aware plans cannot be obtained analytically, sub-optimal plans can be obtained autonomically using online discrete event simulation. Specifically, the proposed approach copes with a broader range of working conditions and types of workload.
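To make "online discrete event simulation" concrete, here is a minimal, generic event-loop sketch (not NAM's actual Capacity Planner) that replays a stream of job arrivals against a candidate number of active hosts and reports the energy and average queueing delay of that plan.

```python
# Minimal discrete-event simulation sketch for evaluating a capacity plan.
# Generic illustration only; the NAM Capacity Planner is not shown here.
import heapq

def simulate(arrivals, service_time, n_hosts, idle_w=100.0, busy_w=250.0):
    """Replay job arrivals against n_hosts servers; return (energy_J, avg_wait)."""
    free_at = [0.0] * n_hosts           # time at which each host becomes free
    heapq.heapify(free_at)
    busy_time, waits = 0.0, []
    for t in arrivals:
        host_free = heapq.heappop(free_at)
        start = max(t, host_free)       # job waits if all hosts are busy
        waits.append(start - t)
        busy_time += service_time
        heapq.heappush(free_at, start + service_time)
    makespan = max(max(free_at), arrivals[-1])
    energy = idle_w * n_hosts * makespan + (busy_w - idle_w) * busy_time
    return energy, sum(waits) / len(waits)

# Compare candidate plans: fewer hosts save idle power but increase waiting.
for n in (2, 4, 8):
    print(n, simulate(arrivals=[i * 0.5 for i in range(100)],
                      service_time=3.0, n_hosts=n))
```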

9.
The paper presents a platform for distributed computing, developed using the latest software technologies and computing paradigms to enable big data mining. The platform, called ClowdFlows, is implemented as a cloud-based web application with a graphical user interface that supports the construction and execution of data mining workflows, including web services used as workflow components. As a web application, the ClowdFlows platform imposes no software requirements and can be used from any modern browser, including on mobile devices. The constructed workflows can be declared either private or public, which enables sharing the developed solutions, data and results on the web and in scientific publications. The server-side software of ClowdFlows can be multiplied and distributed to any number of computing nodes. From a developer's perspective the platform is easy to extend and supports distributed development with packages. The paper focuses on big data processing in both batch and real-time processing modes. Big data analytics is provided through several algorithms, including novel ensemble techniques, implemented using the map-reduce paradigm, and through a special stream mining module for continuous parallel workflow execution. The batch and real-time processing modes are demonstrated with practical use cases. Performance analysis shows the benefit of using all available data for learning in distributed mode compared to using only subsets of the data in non-distributed mode. The ability of ClowdFlows to handle big data sets and its nearly perfect linear speedup are demonstrated.
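As a reminder of the map-reduce pattern the platform's analytics build on, here is a generic word-count sketch; it is illustrative only and is not ClowdFlows code.

```python
# Generic map-reduce sketch (illustrative only; not ClowdFlows' implementation).
from multiprocessing import Pool
from collections import Counter
from functools import reduce

def map_phase(chunk: str) -> Counter:
    """Map: count words in one shard of the input."""
    return Counter(chunk.split())

def reduce_phase(a: Counter, b: Counter) -> Counter:
    """Reduce: merge partial counts."""
    return a + b

if __name__ == "__main__":
    shards = ["big data mining", "data mining workflows", "big big data"]
    with Pool() as pool:
        partials = pool.map(map_phase, shards)   # mappers run in parallel
    print(reduce(reduce_phase, partials))        # single reduce step
```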

10.
Multicore computational accelerators such as GPUs are now commodity components for high-performance computing at scale. While such accelerators have been studied in some detail as stand-alone computational engines, their integration into large-scale distributed systems raises new challenges and trade-offs. In this paper, we present an exploration of resource management alternatives for building asymmetric accelerator-based distributed systems. We present these alternatives in the context of a capabilities-aware framework for data-intensive computing, which uses an implementation of the MapReduce programming model for accelerator-based clusters enhanced over the state of the art. The framework can transparently utilize heterogeneous accelerators to derive high performance with low programming effort. Our work is the first to compare heterogeneous types of accelerators, GPUs and Cell processors, in the same environment, and the first to explore the trade-offs between compute-efficient and control-efficient accelerators in data-intensive systems. Our investigation shows that the framework scales well with the number of compute nodes. Furthermore, it runs simultaneously on two different types of accelerators, successfully adapts to the resource capabilities, and performs 26.9% better on average than a static execution approach.

11.
Advances in sensor technology, personal mobile devices, wireless broadband communications, and Cloud computing are enabling real-time collection and dissemination of personal health data to patients and health-care professionals anytime and from anywhere. Personal mobile devices, such as PDAs and mobile phones, are becoming more powerful in terms of processing capability and information management, and play a major role in people's daily lives. This technological advancement has led us to design a real-time health monitoring and analysis system that is scalable and economical for people who require frequent monitoring of their health. In this paper, we focus on the design aspects of an autonomic Cloud environment that collects people's health data, disseminates them to a Cloud-based information repository, and facilitates analysis of the data using software services hosted in the Cloud. To evaluate the software design we have developed a prototype system that we use as an experimental testbed for a specific use case, namely the collection of electrocardiogram (ECG) data obtained in real time from volunteers in order to perform basic ECG beat analysis.
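The paper does not specify its beat-analysis algorithm; the sketch below shows one simple, commonly used approach (threshold-based R-peak detection on a synthetic trace), purely as an illustration of what "basic ECG beat analysis" can look like. The sampling rate and signal shape are invented.

```python
# Toy R-peak detection on a synthetic ECG-like signal (not the paper's algorithm).
import numpy as np

fs = 250                                    # sampling rate in Hz (assumed)
t = np.arange(0, 10, 1 / fs)                # 10 s of signal
signal = 0.1 * np.sin(2 * np.pi * 1.0 * t)  # baseline wander
for beat in np.arange(0.5, 10, 0.8):        # one 'R peak' every 0.8 s (75 bpm)
    signal += np.exp(-((t - beat) ** 2) / (2 * 0.01 ** 2))

threshold = 0.6 * signal.max()
above = signal > threshold
# Rising edges of the thresholded signal mark beat onsets.
onsets = np.where(above[1:] & ~above[:-1])[0]
rr_intervals = np.diff(onsets) / fs         # seconds between beats
print("beats:", len(onsets), "mean HR:", 60 / rr_intervals.mean(), "bpm")
```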

12.
The Monte Carlo (MC) method is the most common technique used for uncertainty quantification, owing to its simplicity and good statistical properties. However, its computational cost is extremely high and, in many cases, prohibitive. Fortunately, the MC algorithm is easily parallelizable, which allows its use in simulations where computing a single realization is very costly. This work presents a methodology for parallelizing the MC method in the context of cloud computing. The strategy is based on the MapReduce paradigm and allows an efficient distribution of tasks in the cloud. The methodology is illustrated on a problem of structural dynamics subject to uncertainties. The results show that the technique produces good results for low-order statistical moments. It is shown that even a simple problem may require many realizations for the histograms to converge, which makes the cloud computing strategy very attractive (due to its high scalability and low cost). Additionally, the results regarding processing time and storage space usage qualify this new methodology as a solution for simulations that require a number of MC realizations beyond the standard.
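The map/reduce decomposition of MC is straightforward: mappers compute independent realizations, and the reducer aggregates the low-order moments. A minimal sketch follows, with an invented one-degree-of-freedom response standing in for the paper's structural model.

```python
# Map/reduce-style Monte Carlo sketch (toy model, not the paper's case study).
from multiprocessing import Pool
import math, random

def realization(seed: int) -> float:
    """One MC sample: static response u = F/k with uncertain stiffness k."""
    rng = random.Random(seed)
    k = rng.lognormvariate(math.log(1000.0), 0.1)   # uncertain stiffness
    return 50.0 / k                                  # displacement under F = 50

if __name__ == "__main__":
    n = 100_000
    with Pool() as pool:                  # 'map': independent realizations
        samples = pool.map(realization, range(n))
    mean = sum(samples) / n               # 'reduce': low-order moments
    var = sum((x - mean) ** 2 for x in samples) / (n - 1)
    print(f"mean={mean:.6f}  std={math.sqrt(var):.6f}")
```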

13.
There is growing interest in the use of cloud computing in education. As organisations involved in the area typically face severe budget restrictions, there is a need for cost optimisation mechanisms that exploit the unique features of digital learning environments. In this work, we introduce a method based on Maximum Likelihood Estimation that takes the heterogeneity of IT infrastructure into account in order to devise resource allocation plans that maximise platform utilisation in educational environments. We performed experiments using datasets modelled on real digital teaching solutions and obtained cost reductions of up to 30% compared with conservative resource allocation strategies.
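As an illustration of the general idea (the paper's actual estimator and allocation model are not given in the abstract): fit a demand distribution by maximum likelihood, then provision capacity at a chosen quantile instead of at the observed peak.

```python
# MLE-based provisioning sketch (assumed normal demand; not the paper's model).
import math, random

random.seed(0)
demand = [random.gauss(400, 60) for _ in range(1000)]   # concurrent users/hour

# MLE for a normal distribution: sample mean and (biased) sample std.
mu = sum(demand) / len(demand)
sigma = math.sqrt(sum((d - mu) ** 2 for d in demand) / len(demand))

# Provision at the 95th percentile of the fitted distribution rather than
# at the observed peak, trading a small overflow risk for lower cost.
z95 = 1.645
capacity = mu + z95 * sigma
print(f"peak={max(demand):.0f}  provisioned={capacity:.0f}")
```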

14.
This paper proposes a network security processing model for cloud environments, in which every cloud server runs its own intrusion detection system and all servers share a single anomaly management platform that is responsible for receiving and processing alerts and for log management. The model employs dynamic adjustment of alert levels and sharing of attack information, minimising the false-negative rate and the likelihood that servers suffer the same attack, and effectively improving detection efficiency and the overall security level of the system.
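The abstract does not spell out the adjustment rule; the sketch below shows one plausible reading, with hypothetical names throughout: a shared platform that raises a server's alert level after repeated alerts and records attack signatures so other servers can pre-emptively block them.

```python
# Hypothetical sketch of the shared anomaly management platform described above.
from collections import defaultdict

class AnomalyPlatform:
    """Receives alerts from all servers, tunes alert levels, shares signatures."""
    def __init__(self):
        self.alert_level = defaultdict(lambda: 1)   # per-server level, 1..3
        self.known_attacks = set()                  # signatures shared by all
        self.log = []

    def report(self, server: str, signature: str):
        self.log.append((server, signature))        # log management
        self.known_attacks.add(signature)           # attack-information sharing
        hits = sum(1 for s, _ in self.log if s == server)
        # Dynamic alert-level adjustment: escalate under repeated alerts.
        self.alert_level[server] = min(3, 1 + hits // 5)

    def should_block(self, signature: str) -> bool:
        """Other servers consult shared knowledge before handling traffic."""
        return signature in self.known_attacks

platform = AnomalyPlatform()
platform.report("vm-01", "sql-injection-42")
print(platform.should_block("sql-injection-42"))   # True on every other server
```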

15.
Cloud computing, the recently emerged revolution in the IT industry, is empowered by virtualisation technology. In this paradigm, the user's applications run on virtual machines (VMs). The process of selecting suitable physical machines to host these virtual machines is called virtual machine placement; it plays an important role in the resource utilisation and power efficiency of a cloud computing environment. In this paper, we propose an imperialist competitive algorithm for the virtual machine placement problem, called ICA-VMPLC. ICA is chosen as the base optimisation algorithm because of its ease of neighbourhood movement, good convergence rate and suitable terminology. The proposed algorithm explores the search space in a unique manner to efficiently obtain a placement that simultaneously minimises power consumption and total resource wastage. Its final solution is compared with several existing methods, such as grouping genetic and ant colony-based algorithms, as well as a bin packing heuristic. The simulation results show that the proposed method is superior to the other tested algorithms in terms of power consumption, resource wastage, CPU usage efficiency and memory usage efficiency.
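The abstract names the two objectives but not their form; a common formulation (assumed here, not necessarily ICA-VMPLC's) scores a placement by summing per-host linear power and the imbalance between leftover CPU and leftover memory. A greedy first-fit baseline stands in for the full imperialist competitive loop.

```python
# Placement objective sketch: power + resource wastage (assumed formulation).
def placement_cost(hosts, p_idle=100.0, p_peak=250.0):
    power, wastage = 0.0, 0.0
    for cpu_used, mem_used in hosts:
        if cpu_used == 0 and mem_used == 0:
            continue                               # idle hosts powered off
        power += p_idle + (p_peak - p_idle) * cpu_used
        # Wastage: imbalance between leftover CPU and leftover memory.
        wastage += abs((1 - cpu_used) - (1 - mem_used))
    return power + 100.0 * wastage                 # weighted sum of objectives

def first_fit(vms, n_hosts):
    """Greedy baseline; an ICA would search over many such placements."""
    hosts = [[0.0, 0.0] for _ in range(n_hosts)]
    for cpu, mem in vms:
        for h in hosts:                            # first host with room wins
            if h[0] + cpu <= 1.0 and h[1] + mem <= 1.0:
                h[0] += cpu
                h[1] += mem
                break
    return hosts

vms = [(0.3, 0.2), (0.5, 0.6), (0.2, 0.3), (0.4, 0.1)]
print(placement_cost(first_fit(vms, n_hosts=4)))
```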

16.
Data security is a primary concern for enterprises moving data to the cloud. This study attempts to match data of different values with different security management strategies, from the perspective of the enterprise user. Drawing on core ideas of data value evaluation from information lifecycle management, this study extracts usage features and user features from the operating data of the enterprise information system, and applies K-means to cluster the data according to its value. A total of 39,348 logon log records and 120 user records from the information system of a ship-fitting manufacturer in China were collected for an empirical study. The functional modules of the manufacturer's information system are divided into five classes according to their value, a division shown to be reasonable by the discriminant function obtained via discriminant analysis. Differentiated cloud data security management strategies are then formulated for a case study with the five types of data, to strengthen the enterprise's active data security defense in cloud computing.
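A minimal sketch of the clustering step follows, assuming two illustrative features per functional module (usage frequency and mean user privilege level); the feature names and values are invented, and scikit-learn's KMeans stands in for whatever implementation the study used.

```python
# K-means value clustering sketch (invented features; not the study's data).
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

# One row per functional module: [usage frequency, mean user privilege level]
X = np.array([[950, 1.2], [870, 1.4], [420, 2.8], [310, 3.1],
              [120, 4.5], [90, 4.8], [600, 2.0], [50, 4.9]])

X_scaled = StandardScaler().fit_transform(X)   # put features on comparable scales
km = KMeans(n_clusters=5, n_init=10, random_state=0).fit(X_scaled)

for module, label in zip(range(len(X)), km.labels_):
    print(f"module {module}: value class {label}")
# Each value class is then assigned its own cloud security strategy,
# e.g. stronger encryption and stricter access control for high-value classes.
```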

17.
Cloud computing infrastructure is a promising new technology that greatly accelerates the development of large-scale data storage, processing and distribution. However, security and privacy become major concerns when data owners outsource their private data to public cloud servers that are not within their trusted management domains. To avoid information leakage, sensitive data have to be encrypted before being uploaded to the cloud servers, which makes it a big challenge to support efficient keyword-based queries and to rank the matching results over the encrypted data. Most current works only consider single-keyword queries without appropriate ranking schemes. In the current multi-keyword ranked search approach, the keyword dictionary is static and cannot be extended easily when the number of keywords increases; furthermore, it does not take user behavior and keyword access frequency into account. For a query result that contains a large number of documents, an out-of-order ranking problem may occur, making it hard for the data consumer to find the subset most likely to satisfy its requirements. In this paper, we propose a flexible multi-keyword query scheme, called MKQE, to address the aforementioned drawbacks. MKQE greatly reduces the maintenance overhead during keyword dictionary expansion. It takes keyword weights and user access history into consideration when generating the query result, so documents that are accessed more frequently and that match the user's access history more closely are ranked higher in the matching result set. Our experiments show that MKQE performs significantly better than the current solutions.
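The abstract describes the ranking inputs but not the formula; a plausible plaintext analogue of such a score (hypothetical, and ignoring the encryption machinery entirely) combines per-keyword weights with the document's access frequency and its overlap with the user's access history.

```python
# Hypothetical plaintext analogue of a weighted multi-keyword ranking score.
# The real MKQE scheme evaluates this over encrypted indexes; none of that
# machinery is shown here.
def score(doc, query, keyword_weight, access_count, history, a=1.0, b=0.2, c=0.5):
    match = sum(keyword_weight.get(k, 0.0) for k in query if k in doc["keywords"])
    frequency = access_count.get(doc["id"], 0)        # popular docs rank higher
    personal = 1.0 if doc["id"] in history else 0.0   # user's own history
    return a * match + b * frequency + c * personal

docs = [{"id": "d1", "keywords": {"cloud", "security"}},
        {"id": "d2", "keywords": {"cloud", "storage", "security"}}]
weights = {"cloud": 0.4, "security": 0.9, "storage": 0.3}
ranked = sorted(docs, key=lambda d: score(d, {"cloud", "security"}, weights,
                                          {"d1": 7, "d2": 2}, {"d2"}),
                reverse=True)
print([d["id"] for d in ranked])
```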

18.
The concept of cloud computing has emerged as the next generation of computing infrastructure that reduces the costs associated with the management of hardware and software resources. It is vital to its success that cloud computing offer efficiency, flexibility and security. In this paper, we propose an efficient and anonymous data sharing protocol with a flexible sharing style, named EFADS, for outsourcing data onto the cloud. Through formal security analysis, we demonstrate that EFADS provides data confidentiality and data-sharer anonymity without requiring any fully trusted party. Experimental results show that EFADS is more efficient than existing competing approaches. Furthermore, the proxy re-encryption scheme we propose in this paper may be of independent interest: compared with previously reported proxy re-encryption schemes, it is the first pairing-free, anonymous and unidirectional proxy re-encryption scheme in the standard model.

19.
20.
With the increasing trend of outsourcing data to the cloud for efficient storage, secure data collaboration services, including data reads and writes in cloud computing, are urgently required. However, this introduces many new data security challenges. The key issue is how to support secure, collaborative write operations on ciphertext; further issues include the difficulty of key management and the heavy computational overhead placed on users, since cooperating users may read and write data from any device. In this paper, we propose a secure and efficient data collaboration scheme in which fine-grained access control of ciphertext and secure data write operations are provided through attribute-based encryption (ABE) and attribute-based signature (ABS), respectively. To relieve the attribute authority of its heavy key management burden, our scheme employs a full delegation mechanism based on hierarchical attribute-based encryption (HABE). Further, we propose a partial decryption and signing construction that delegates most of the users' computational overhead to the cloud service provider. The security and performance analyses show that our scheme is secure and efficient.
