Similar Documents
A total of 20 similar documents were found (search time: 31 ms).
1.
The cloud computing service delivery and consumption model is based on communication infrastructure (the network). The network serves as the linkage between the end users consuming cloud services and the data centers providing them. In addition, in large-scale cloud data centers, tens of thousands of compute and storage nodes are connected by a data center network to deliver a single-purpose cloud service. To this end, several questions can be raised, such as the following: How do network architectures affect cloud computing? How will network architecture evolve to better support cloud computing and cloud-based service delivery? What is the network's role in the reliability, performance, scalability, and security of cloud computing? Should the network be a dumb transport pipe or an intelligent stack that is aware of the cloud workload? This paper focuses on the networking aspect of cloud computing and provides insights into these questions. Researchers can use this paper to accelerate their research on devising mechanisms for (i) provisioning the cloud network as a service and (ii) engineering data center networks. Copyright © 2014 John Wiley & Sons, Ltd.

2.
Cloud computing and storage services allow clients to move their data center and applications to centralized large data centers and thus avoid the burden of local data storage and maintenance. However, this poses new challenges related to creating secure and reliable data storage over unreliable service providers. In this study, we address the problem of ensuring the integrity of data storage in cloud computing. In particular, we consider methods for reducing the burden of generating a constant amount of metadata at the client side. By exploiting some good attributes of the bilinear group, we can devise a simple and efficient audit service for public verification of untrusted and outsourced storage, which can be important for achieving widespread deployment of cloud computing. Whereas many prior studies on ensuring remote data integrity did not consider the burden of generating verification metadata at the client side, the objective of this study is to resolve this issue. Moreover, our scheme also supports data dynamics and public verifiability. Extensive security and performance analysis shows that the proposed scheme is highly efficient and provably secure.

3.
Implementing a cloud game service platform that supports multiple users and devices through real-time streaming involves many technical needs, including capturing the game screen and sound, real-time audio/video encoding of the game screen generated by a high-performance server, and real-time streaming to client devices such as low-cost PCs, smart devices, and set-top boxes. We therefore present a game service platform that runs and manages the game screen and sound on the server, where the captured and encoded game screen and sound are delivered separately to client devices through real-time streaming. The proposed platform offers Web-based services that allow game play on smaller end devices without requiring the games to be installed locally.

4.

Cloud computing is a global technology for data storage and retrieval. Many organizations are moving to cloud technology so that they can lease cloud services on a membership or pay-as-you-go basis rather than building their own systems. The cloud service provider and cloud service accessibility are two major concerns in cloud computing. The Economic Denial of Sustainability (EDoS) attack is an important attack on cloud service providers. Attackers may send continuous requests to the cloud within a single second, so legitimate, paying users cannot access their data due to the heavy cloud traffic, which imposes an economic burden on them. This paper therefore presents ADS-PAYG (Attack Defense Shell - Pay As You Go), an approach that uses a trust factor method against the EDoS attack to admit a larger number of authenticated users by fixing a threshold value. The algorithm produces effective results in terms of response time, accuracy, and CPU utilization. The ADS-PAYG solution is implemented in MATLAB, outperforms other trust factor estimation methods, and effectively distinguishes attackers from legitimate users. The detection accuracy is 83.43% for the given dataset, which is high compared with existing algorithms.

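A minimal sketch of the threshold-based admission idea summarized above, assuming a hypothetical per-user trust factor computed from observed request behaviour; the feature names, weights, and threshold are illustrative assumptions, not the paper's actual ADS-PAYG algorithm:

```python
# Hypothetical illustration of threshold-based admission using a trust factor.
# The scoring weights and threshold below are assumptions, not the ADS-PAYG spec.
from dataclasses import dataclass

@dataclass
class UserStats:
    requests_per_sec: float   # observed request rate
    failed_auths: int         # failed authentication attempts
    account_age_days: int     # age of the account

def trust_factor(s: UserStats) -> float:
    """Map observed behaviour to a trust score in [0, 1] (illustrative formula)."""
    rate_penalty = min(s.requests_per_sec / 100.0, 1.0)   # heavy senders look suspicious
    auth_penalty = min(s.failed_auths / 5.0, 1.0)
    age_bonus = min(s.account_age_days / 365.0, 1.0)
    return 0.5 * (1 - rate_penalty) + 0.3 * (1 - auth_penalty) + 0.2 * age_bonus

THRESHOLD = 0.6  # assumed cut-off; requests from users below it are rejected

def admit(s: UserStats) -> bool:
    return trust_factor(s) >= THRESHOLD

if __name__ == "__main__":
    legitimate = UserStats(requests_per_sec=2, failed_auths=0, account_age_days=400)
    attacker = UserStats(requests_per_sec=500, failed_auths=4, account_age_days=1)
    print(admit(legitimate), admit(attacker))  # True False
```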

5.
Mobile cloud computing (MCC) has enriched the quality of the services that clients access from remote cloud-based servers. The growth in the number of wireless users of MCC has further increased the need for a robust and efficient authenticated key agreement mechanism. Formerly, users would access cloud services from various cloud-based service providers and authenticate one another only after communicating with a trusted third party (TTP). This requirement for clients to contact the TTP during each mutual authentication session contributes redundant latency overhead in earlier schemes. Recently, Tsai et al presented a bilinear pairing based multi-server authentication (MSA) protocol that bypasses the TTP, at least during mutual authentication. The construction works fine as far as eliminating TTP involvement in authentication is concerned. However, the Tsai et al scheme has been found vulnerable to server spoofing and desynchronization attacks and lacks smart card-based user verification, which renders the protocol unsuitable for practical implementation in different access networks. Hence, we propose an improved model built on bilinear pairing operations that counters the threats identified in the Tsai et al scheme. Additionally, the proposed scheme is backed by a performance evaluation and a formal security analysis.

6.
Cloud storage services require cost-effective, scalable, and self-managed secure data management functionality. Public cloud storage always forces users to adopt the restricted generic security configuration provided by the cloud service provider. In contrast, private cloud storage gives users the opportunity to configure a self-managed and controlled authenticated data security model to control the accessing and sharing of data in a private cloud. However, this introduces several new challenges to data security. One critical issue is how to enable a secure, authenticated data storage model for data access with controlled data accessibility. In this paper, we propose an authenticated controlled data access and sharing scheme called ACDAS to address this issue. In our proposed scheme, we employ a biometric-based authentication model for secure access to data storage and sharing. To provide flexible data sharing under the control of a data owner, we propose a variant of a proxy reencryption scheme where the cloud server uses a proxy reencryption key and the data owner generates a credential token during decryption to control the accessibility of the users. The security analysis shows that our proposed scheme is resistant to various attacks, including a stolen verifier attack, a replay attack, a password guessing attack, and a stolen mobile device attack. Further, our proposed scheme satisfies the considered security requirements of a data storage and sharing system. The experimental results demonstrate that ACDAS can achieve the security goals together with practical efficiency in storage, computation, and communication compared with other related schemes.

7.
We predict performance metrics of cloud services using statistical learning, whereby the behaviour of a system is learned from observations. Specifically, we collect device and network statistics from a cloud testbed and apply regression methods to predict, in real-time, client-side service metrics for video streaming and key-value store services. Results from intensive evaluation on our testbed indicate that our method accurately predicts service metrics in real time (mean absolute error below 16% for video frame rate and read latency, for instance). Further, our method is service agnostic in the sense that it takes as input operating systems and network statistics instead of service-specific metrics. We show that feature set reduction significantly improves the prediction accuracy in our case, while simultaneously reducing model computation time. We find that the prediction accuracy decreases when, instead of a single service, both services run on the same testbed simultaneously or when the network quality on the path between the server cluster and the client deteriorates. Finally, we discuss the design and implementation of a real-time analytics engine, which processes streams of device statistics and service metrics from testbed sensors and produces model predictions through online learning.
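A minimal sketch of the regression approach described above, with synthetic data standing in for the testbed's device and network statistics; the feature names, model choice, and data are assumptions for illustration, not the paper's setup:

```python
# Illustrative only: synthetic "device/network statistics" stand in for testbed data,
# and a random forest regressor stands in for whichever regression method is used.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_absolute_error
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 5000
X = np.column_stack([
    rng.uniform(0, 100, n),    # e.g. CPU utilisation (%)
    rng.uniform(0, 1, n),      # e.g. memory pressure
    rng.uniform(0, 50, n),     # e.g. network round-trip time (ms)
])
# Synthetic client-side metric (e.g. video frame rate) with noise.
y = 30 - 0.1 * X[:, 0] - 5 * X[:, 1] - 0.2 * X[:, 2] + rng.normal(0, 1, n)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X_tr, y_tr)
print("MAE:", mean_absolute_error(y_te, model.predict(X_te)))
```

Feature-set reduction, as the abstract notes, would correspond to fitting on a selected subset of the columns before training.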

8.
In cloud storage, the key to preventing data loss is file fault tolerance. However, a cloud storage provider may not deliver the fault-tolerance level it promised, exposing users to the dual risks of data loss and financial loss. Existing methods for verifying how fault-tolerantly cloud data is stored are vulnerable to server read-ahead spoofing attacks, are inefficient and impractical, and cannot quickly and at lightweight cost detect misbehaving servers within a given probability bound. To address these problems, this paper exploits the difference between sequential and random disk access to design a remote verification method for fault-tolerant data storage, the Differentiation of Random and Sequential access Time (DRST) method. Its principle is that file blocks are placed dispersedly on different disks, and reading file blocks stored sequentially on one disk requires a shorter response time than randomly reading file blocks from different disks. Finally, the proposed method is given a rigorous theoretical proof and an in-depth performance analysis; the results show that it can quickly verify whether a server provides the fault-tolerance level promised to users and is both more secure and more efficient than existing schemes.
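A minimal local sketch of the timing idea behind DRST, assuming a single large file on the local machine rather than a remote cloud server; the block size, block count, decision threshold, and file path are illustrative assumptions, not the paper's protocol:

```python
# Illustrative only: compares the time to read blocks sequentially versus in random
# order from one local file. The real DRST check runs against a remote server with
# its own decision rule; the threshold and path here are assumptions.
import os
import random
import time

BLOCK_SIZE = 4096
NUM_BLOCKS = 256
THRESHOLD = 2.0  # assumed: random reads taking >2x sequential time suggest a single disk

def timed_reads(path: str, offsets: list[int]) -> float:
    start = time.perf_counter()
    with open(path, "rb") as f:
        for off in offsets:
            f.seek(off)
            f.read(BLOCK_SIZE)
    return time.perf_counter() - start

def looks_fault_tolerant(path: str) -> bool:
    sequential = [i * BLOCK_SIZE for i in range(NUM_BLOCKS)]
    scattered = random.sample(
        range(0, os.path.getsize(path) - BLOCK_SIZE, BLOCK_SIZE), NUM_BLOCKS)
    t_seq = timed_reads(path, sequential)
    t_rand = timed_reads(path, scattered)
    # If random access is not much slower than sequential access, the blocks are
    # plausibly spread over several disks rather than stored on a single one.
    return (t_rand / t_seq) < THRESHOLD

if __name__ == "__main__":
    print(looks_fault_tolerant("/path/to/large_test_file"))  # hypothetical file path
```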

9.
The massive growth of cloud computing has led to huge amounts of energy consumption and carbon emissions by a large number of servers. One of the major aspects of cloud computing is the scheduling of the many task requests submitted by users. Minimizing energy consumption while ensuring the user's QoS preferences is very important for achieving profit maximization for cloud service providers and for meeting the user's service level agreement (SLA). Therefore, in addition to executing users' tasks, cloud data centers should meet different criteria in allocating cloud resources by considering the multiple requirements of different users. Mapping user requests to cloud resources for processing in a distributed environment is a well-known NP-hard problem. To address this problem, this paper proposes an energy-efficient task-scheduling algorithm based on the best-worst method (BWM) and the Technique for Order Preference by Similarity to Ideal Solution (TOPSIS). The main objective of this paper is to determine which cloud scheduling solution is the most appropriate to select. First, a decision-making group identifies the evaluation criteria. After that, a BWM process is applied to assign an importance weight to each criterion, because the selected criteria have varied importance. Then, TOPSIS uses these weighted criteria as inputs to evaluate and measure the performance of each alternative. The performance of the proposed and existing algorithms is evaluated using several benchmarks in the CloudSim toolkit and statistical testing through ANOVA, where the evaluation metrics include makespan, energy consumption, and resource utilization.
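A minimal sketch of the TOPSIS ranking step described above, assuming the criterion weights have already been obtained (e.g. from a BWM process); the decision matrix, weights, and benefit/cost directions are illustrative, not the paper's data:

```python
# Illustrative TOPSIS ranking of scheduling alternatives.
# The decision matrix, weights, and benefit/cost flags are made-up inputs;
# in the paper the weights would come from the BWM step.
import numpy as np

def topsis(matrix: np.ndarray, weights: np.ndarray, benefit: np.ndarray) -> np.ndarray:
    """Return closeness scores (higher = better) for each alternative (row)."""
    norm = matrix / np.linalg.norm(matrix, axis=0)        # vector normalisation
    weighted = norm * weights                             # apply criterion weights
    ideal = np.where(benefit, weighted.max(axis=0), weighted.min(axis=0))
    anti = np.where(benefit, weighted.min(axis=0), weighted.max(axis=0))
    d_pos = np.linalg.norm(weighted - ideal, axis=1)      # distance to ideal solution
    d_neg = np.linalg.norm(weighted - anti, axis=1)       # distance to anti-ideal
    return d_neg / (d_pos + d_neg)

# Rows: candidate schedules; columns: makespan (s), energy (J), resource utilisation.
matrix = np.array([[120.0, 450.0, 0.72],
                   [150.0, 380.0, 0.80],
                   [110.0, 500.0, 0.65]])
weights = np.array([0.5, 0.3, 0.2])          # assumed BWM output
benefit = np.array([False, False, True])     # makespan and energy: lower is better
scores = topsis(matrix, weights, benefit)
print("Best alternative:", int(np.argmax(scores)), scores)
```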

10.
Cloud computing, a new paradigm in distributed computing, has gained wide popularity in a relatively short span of time. With the increase in the number, functionality, and features of cloud services, it is increasingly difficult for cloud users to find a trustworthy provider. Cloud users need to have confidence in cloud providers before migrating their critical data to cloud computing. There must be some means to determine the reliability of service providers so that users can choose services with the assurance that the provider will not act maliciously. This paper formulates a hybrid model to calculate the trustworthiness of service providers. Cloud services are evaluated and a trust value is calculated based on compliance and reputation. Compliance derived from service logs reflects dynamic trust. Reputation is computed from collective user feedback, where each feedback rating is a user's view of the invoked services. The discovered services that fulfill the user's requirements are ranked by their trust values, and the top-k cloud services are recommended to the user. The proposed approach is efficient and considerably improves the service-selection process in cloud applications.
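A minimal sketch of a hybrid trust score that combines compliance and reputation and then selects the top-k providers; the combination weight and example data are assumptions, not the paper's exact model:

```python
# Illustrative hybrid trust: trust = alpha * compliance + (1 - alpha) * reputation.
# Providers, scores, and alpha are made up; the paper derives compliance from service
# logs and reputation from collective user feedback ratings.
from statistics import mean

ALPHA = 0.6  # assumed weight given to log-based compliance

providers = {
    # name: (compliance score from SLA logs in [0, 1], user feedback ratings in [0, 5])
    "cloudA": (0.95, [4, 5, 4, 3]),
    "cloudB": (0.80, [5, 5, 4, 5]),
    "cloudC": (0.99, [2, 3, 2, 3]),
}

def trust(compliance: float, feedback: list[int]) -> float:
    reputation = mean(feedback) / 5.0                 # normalise ratings to [0, 1]
    return ALPHA * compliance + (1 - ALPHA) * reputation

def top_k(k: int) -> list[tuple[str, float]]:
    scored = [(name, trust(c, fb)) for name, (c, fb) in providers.items()]
    return sorted(scored, key=lambda t: t[1], reverse=True)[:k]

if __name__ == "__main__":
    print(top_k(2))  # two most trustworthy providers with their scores
```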

11.
Cloud computing is considered the latest emerging computing paradigm and has brought revolutionary changes to computing technology. With the advancement of this field, the number of cloud users and service providers is increasing continuously, along with more diversified services. Consequently, the selection of an appropriate cloud service has become a difficult task for a new cloud customer. In the case of an inappropriate selection of cloud services, a cloud customer may face the vendor lock-in issue as well as data portability and interoperability problems. These are major obstacles to the adoption of cloud services. To avoid these complexities, a cloud customer needs to select an appropriate cloud service at the initial stage of migration to the cloud. Many approaches have been proposed to overcome these issues, but problems still exist in intercommunication standards among clouds and in vendor lock-in. This research proposes an IEEE Foundation for Intelligent Physical Agents (FIPA) compliant multiagent reference architecture for cloud discovery and selection using a cloud ontology. The proposed approach mitigates the prevailing vendor lock-in issue and also alleviates the portability and interoperability problems in cloud computing. To evaluate the proposed reference architecture and compare it with state-of-the-art approaches, several experiments have been performed using commonly used performance measures. The analysis indicates that the proposed approach enables significant improvements in cloud service discovery and selection in terms of search efficiency, execution, and response time.

12.
To address security and efficiency problems in threshold-based deduplication of cloud data, a novel method based on threshold re-encryption is proposed to deal with side-channel attacks. A lightweight threshold re-encryption mechanism transfers the secondary encryption to the cloud for execution and allows clients to generate ciphertext based on key segmentation instead of ciphertext segmentation, both of which largely reduce the computational overhead of clients. The proposed mechanism also enables clients to decrypt both one-time-encrypted and re-encrypted ciphertext, thus avoiding the overhead of redundantly encrypting the same file. Mutual integrity verification between the cloud service provider and clients is also supported, which directly ensures the correctness of the correspondence between ciphertext and plaintext on the client side. Experiments show that the proposed method not only largely reduces the computational overhead on the client side but also achieves superior storage performance on the cloud side.

13.
Cloud service providers offer infrastructure, network services, and software applications in the cloud. Cloud services are hosted in a data center and can be used by users over a network connection. Hence, there is a need to provide security and integrity for cloud resources. Most security instruments have a finite rate of failure, and as intrusions employ more complex and sophisticated techniques, security failure rates are skyrocketing. In this paper, we propose a secure disintegration protocol (SDP) for the protection of privacy on-site and in the cloud. The architecture presented in this paper is used for cloud storage in conjunction with our unique data compression and encoding technique. Probabilistic analysis is used to calculate the intrusion tolerance abilities of the SDP.

14.
More and more applications are being released as network services in the cloud under the control of service providers, but the users of these services have no way to judge whether the services are trustworthy. This paper supports the establishment of trusted services in a cloud computing environment through a trust management framework, allowing users to learn the trustworthiness of a service program through a neutral third party and realizing a trusted platform service. Finally, a prototype system is implemented on a cloud platform supporting the Python/Django framework, enabling service providers to prove the trustworthiness of their Python code to external users while packaging service program instances. Once running, a service instance has an independent identity and can prevent users from tampering with its code.
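A minimal sketch of one way a neutral third party could attest to packaged service code: publish a digest of the code and let clients compare it against a running instance's reported digest. This illustrates the attestation idea only; the paths and reference digest are hypothetical and this is not the framework described in the paper:

```python
# Illustrative code-measurement check: hash every source file of a packaged service
# and compare the combined digest with a reference value published by a third party.
# The directory path and published digest are hypothetical.
import hashlib
from pathlib import Path

def measure(service_dir: str) -> str:
    """Combine SHA-256 digests of all .py files (sorted for determinism)."""
    h = hashlib.sha256()
    for path in sorted(Path(service_dir).rglob("*.py")):
        h.update(path.read_bytes())
    return h.hexdigest()

def is_trusted(service_dir: str, published_digest: str) -> bool:
    return measure(service_dir) == published_digest

if __name__ == "__main__":
    digest = measure("./my_django_service")           # hypothetical service directory
    print(digest)
    print(is_trusted("./my_django_service", digest))  # True for an untampered copy
```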

15.
16.
Geographically distributed data centers are interconnected through provisioned dedicated WAN links, realized by circuit/wavelength switching, that support large-scale data transfer between data centers. These dedicated WAN links are typically shared by multiple services through on-demand and in-advance resource reservations, resulting in varying bandwidth availability in future time periods. Such an inter-data center network provides a dynamic and virtualized environment when augmented with cloud infrastructure supporting end-host migration. In such an environment, dynamically provisioned network resources are recognized as extremely useful capabilities for many types of network services. However, existing approaches to in-advance reservation services provide limited reservation capabilities, e.g., limited connections over links returned by traceroute on traditional IP-based networks. Moreover, most existing approaches do not address fault tolerance in the event of node or link failures and do not handle end-host migrations; thus, they do not provide a reliability guarantee for in-advance reservation frameworks. In this paper, we propose using multiple paths to increase bandwidth usage on the WAN links between data centers when a single path does not provide the requested bandwidth. Emulation-based evaluations of the proposed path computation show a higher reservation acceptance rate compared with state-of-the-art reservation frameworks, and the computed paths can be configured with a limited number of static forwarding rules on switches. Our prototype provides a RESTful Web service interface for link-failure and end-host migration event management and reroutes paths for all the affected reservations.
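A minimal sketch of the multipath idea described above: when no single path offers the requested bandwidth, check whether several paths together can, by treating residual link bandwidth as capacities in a max-flow computation. The topology and capacities are made up, and max-flow is used here only as a stand-in for the paper's actual path-computation algorithm:

```python
# Illustrative only: uses networkx maximum_flow over assumed residual bandwidths to
# decide whether a bandwidth request can be satisfied by splitting it across paths.
import networkx as nx

def can_reserve(g: nx.DiGraph, src: str, dst: str, requested_gbps: float) -> bool:
    flow_value, _ = nx.maximum_flow(g, src, dst, capacity="bandwidth")
    return flow_value >= requested_gbps

g = nx.DiGraph()
# Hypothetical inter-data-center links with residual bandwidth (Gb/s) for a time slot.
g.add_edge("dc1", "r1", bandwidth=6)
g.add_edge("dc1", "r2", bandwidth=5)
g.add_edge("r1", "dc2", bandwidth=4)
g.add_edge("r2", "dc2", bandwidth=5)

# No single path offers 8 Gb/s, but the r1 path (4) plus the r2 path (5) together do.
print(can_reserve(g, "dc1", "dc2", 8))  # True
```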

17.
Cloud storage technology and its applications   (total citations: 11; self-citations: 0; citations by others: 11)
Cloud storage uses software to aggregate a large number of heterogeneous storage devices so that they work together to provide data storage services. Cloud storage services pose new challenges to traditional storage technology in terms of data security, reliability, and manageability. This article studies the four layers of the cloud storage platform architecture: the data storage layer, which interconnects multiple storage devices; the data management layer, which provides common supporting technologies for multiple services; the data service layer, which supports multiple storage applications; and the access layer, which faces multiple users. Taking a typical cloud storage application, cloud backup (B-Cloud), as an example, it discusses the software architecture, application characteristics, and key research issues of cloud backup.

18.
This letter presents a model for a dynamic collaboration (DC) platform among cloud providers (CPs) that prevents adverse business impacts, cloud vendor lock-in and violation of service level agreements with consumers, and also offers collaborative cloud services to consumers. We consider two major challenges. The first challenge is to find an appropriate market model in order to enable the DC platform. The second is to select suitable collaborative partners to provide services. We propose a novel combinatorial auction-based cloud market model that enables a DC platform among CPs. We also propose a new promising multi-objective optimization model to quantitatively evaluate the partners. Simulation experiments were conducted to verify both of the proposed models.

19.
Smart grid systems are widely used across the world to provide demand response management between users and service providers. In most energy distribution scenarios, traditional grid systems use a centralized architecture, which results in large transmission losses and high overheads during power generation. Moreover, owing to the presence of intruders or attackers, there may be a mismatch between demand and supply between utility centers (suppliers) and end users. Thus, there is a need for an automated energy exchange that provides secure and reliable energy trading between users and suppliers. We found, from the existing literature, that blockchain can be an effective solution to handle the aforementioned issues. Motivated by these facts, we propose a blockchain-based smart energy trading scheme, ElectroBlocks, which provides efficient mechanisms for secure energy exchanges between users and service providers. In ElectroBlocks, nodes in the network validate transactions using two algorithms that are cost aware and store aware. The cost-aware algorithm locates the nearest node that can supply the energy, whereas the store-aware algorithm ensures that energy requests go to the node with the lowest storage space. We evaluated the performance of ElectroBlocks using metrics such as mining delay, network exchanges, and storage energy. The simulation results demonstrate that ElectroBlocks maintains a secure trade-off between users and service providers when using the proposed cost-aware and store-aware algorithms.
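A minimal sketch of the two node-selection rules summarized above, cost aware (nearest node able to supply the requested energy) and store aware (node with the lowest remaining storage space); the node data and selection details are illustrative, not the ElectroBlocks implementation:

```python
# Illustrative node selection for an energy request.
# Node attributes (distance, available energy, storage) are made-up values.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Node:
    name: str
    distance_km: float     # distance from the requester
    available_kwh: float   # energy the node can supply
    storage_kwh: float     # remaining storage space at the node

def cost_aware(nodes: list[Node], requested_kwh: float) -> Optional[Node]:
    """Nearest node that can cover the requested energy."""
    capable = [n for n in nodes if n.available_kwh >= requested_kwh]
    return min(capable, key=lambda n: n.distance_km, default=None)

def store_aware(nodes: list[Node]) -> Optional[Node]:
    """Node with the lowest remaining storage space."""
    return min(nodes, key=lambda n: n.storage_kwh, default=None)

nodes = [Node("n1", 2.0, 5.0, 40.0),
         Node("n2", 5.0, 20.0, 10.0),
         Node("n3", 1.0, 8.0, 25.0)]
print(cost_aware(nodes, 7.0).name)  # n3: nearest node with at least 7 kWh available
print(store_aware(nodes).name)      # n2: lowest remaining storage space
```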

20.
Cloud computing technologies have been prospering in recent years and have opened an avenue for a wide variety of forms of adaptable data sharing. Taking advantage of these state-of-the-art innovations, the cloud storage data owner must, however, use a suitable identity-based cryptographic mechanism to ensure the safety prerequisites while sharing data with large numbers of cloud data users with fuzzy identities. As a successful way to guarantee secure fuzzy sharing of cloud data, identity-based cryptographic technology still faces an effectiveness problem under multireceiver configurations. Chaos theory is considered a reasonable strategy for reducing computational complexity while meeting the cryptographic protocol's security needs. In an identity-based cryptographic protocol, public keys for individual clients are distributed, allowing the clients to independently select their own network identities or names as their public keys. In fact, in a public-key cryptographic protocol, it is best that confirmation of the public key is done in a safe, private manner, because in this way the storage load on the server side can be considerably relieved. The objective of this paper is to outline and examine a conversion process that can transfer cryptosystems using Chebyshev's chaotic maps over the Galois field to a subtree-based protocol in the cloud computing setting for fuzzy user data sharing, as opposed to concocting a different structure. Furthermore, in the design of our conversion process, no adjustment of the original cryptosystem based on chaotic maps is needed.
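A minimal sketch of the algebraic property that chaotic-map cryptosystems over a Galois field rely on: Chebyshev polynomials computed modulo a prime satisfy the semigroup property T_r(T_s(x)) = T_rs(x) mod p, which enables a Diffie-Hellman-style exchange. The prime, the public value x, and the private exponents below are tiny toy numbers, not the paper's protocol parameters:

```python
# Illustrative Chebyshev-polynomial key agreement over GF(p).
# Uses the recurrence T_0(x)=1, T_1(x)=x, T_n(x)=2x*T_{n-1}(x)-T_{n-2}(x) (mod p),
# evaluated via fast exponentiation of the recurrence matrix. Parameters are toys.
P = 2**31 - 1  # small Mersenne prime (toy modulus)

def chebyshev(n: int, x: int, p: int = P) -> int:
    """Compute T_n(x) mod p."""
    def mat_mul(a, b):
        return [[(a[0][0]*b[0][0] + a[0][1]*b[1][0]) % p,
                 (a[0][0]*b[0][1] + a[0][1]*b[1][1]) % p],
                [(a[1][0]*b[0][0] + a[1][1]*b[1][0]) % p,
                 (a[1][0]*b[0][1] + a[1][1]*b[1][1]) % p]]
    result = [[1, 0], [0, 1]]              # identity matrix
    base = [[2 * x % p, p - 1], [1, 0]]    # companion matrix of the recurrence
    k = n
    while k:
        if k & 1:
            result = mat_mul(result, base)
        base = mat_mul(base, base)
        k >>= 1
    # [T_n, T_{n-1}] = M^n [T_0, T_{-1}] with T_0 = 1 and T_{-1} = T_1 = x.
    return (result[0][0] * 1 + result[0][1] * x) % p

x = 123456            # public parameter
r, s = 7919, 104729   # private values of the two parties (toy numbers)
A = chebyshev(r, x)   # sent by party 1
B = chebyshev(s, x)   # sent by party 2
# Both sides derive the same shared value because T_r(T_s(x)) = T_{rs}(x) mod p.
assert chebyshev(r, B) == chebyshev(s, A)
print(chebyshev(r, B))
```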
