Similar Documents
20 similar documents found (search time: 93 ms)
1.
Cloud computing and storage services allow clients to move their data and applications to centralized large data centers and thus avoid the burden of local data storage and maintenance. However, this poses new challenges related to creating secure and reliable data storage over unreliable service providers. In this study, we address the problem of ensuring the integrity of data storage in cloud computing. In particular, we consider methods for reducing the burden of generating a constant amount of metadata at the client side. By exploiting useful properties of bilinear groups, we can devise a simple and efficient audit service for public verification of untrusted and outsourced storage, which can be important for achieving widespread deployment of cloud computing. Whereas many prior studies on ensuring remote data integrity did not consider the burden of generating verification metadata at the client side, the objective of this study is to resolve this issue. Moreover, our scheme also supports data dynamics and public verifiability. Extensive security and performance analysis shows that the proposed scheme is highly efficient and provably secure.
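The abstract does not spell out the construction. Purely as an illustration of how homomorphic tags enable constant-size, blockless public auditing, here is a minimal Python sketch that uses an RSA-style homomorphic tag instead of the bilinear-group construction the paper describes; the toy primes, generator, block values, and challenge sizes are all hypothetical and far too small for real security.

```python
import hashlib
import random
from math import gcd

# Toy RSA parameters (illustration only; not secure).
p, q = 10007, 10009
N = p * q
lam = (p - 1) * (q - 1) // gcd(p - 1, q - 1)   # lcm(p-1, q-1)
e = 65537
d = pow(e, -1, lam)                            # private tagging key
g = 7                                          # public element of Z_N

def h(i: int) -> int:
    """Hash a block index into Z_N."""
    return int.from_bytes(hashlib.sha256(str(i).encode()).digest(), "big") % N

# Client: one homomorphic tag per integer-encoded data block m_i.
blocks = {1: 42, 2: 17, 3: 99}
tags = {i: pow(h(i) * pow(g, m, N) % N, d, N) for i, m in blocks.items()}

# Auditor: random challenge {(i, nu_i)}.
challenge = {i: random.randrange(1, 1000) for i in blocks}

# Server: aggregate blocks and tags into a constant-size proof.
sigma, mu = 1, 0
for i, nu in challenge.items():
    sigma = sigma * pow(tags[i], nu, N) % N
    mu += nu * blocks[i]

# Auditor: verify without retrieving any individual block.
lhs = pow(sigma, e, N)
rhs = pow(g, mu, N)
for i, nu in challenge.items():
    rhs = rhs * pow(h(i), nu, N) % N
assert lhs == rhs, "audit failed"
print("audit passed")
```

The constant-size proof (sigma, mu) is what keeps the client- and auditor-side metadata burden low, which is the property the abstract emphasizes.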

2.
Cloud storage can provide flexible and scalable data storage services to users. However, once data is uploaded to the cloud without a local copy, the user loses physical control of the data, so a method is needed to ensure the integrity of users' data. To avoid requiring users to retrieve the enormous stored data or check it themselves, a proof-of-storage protocol with public auditing was proposed based on lattice cryptography. The user computes signatures of the data blocks and outsources them to cloud servers. The cloud service provider combines the blocks, and a third-party auditor verifies the integrity of all blocks using only the combined message and signature. Under the small integer solution (SIS) assumption, the protocol is secure against loss and tampering attacks by cloud service providers; under the learning with errors (LWE) assumption, it is secure against curiosity attacks by the third-party auditor. The protocol is quite efficient, requiring just a few matrix-vector multiplications and samplings from discrete Gaussians.

3.
As an emerging technology, cloud-assisted wireless body area networks (WBANs) provide more convenient services to users. Recently, many remote data auditing protocols have been proposed to ensure data integrity and authenticity when data owners outsource their data to the cloud. However, most of them cannot check data integrity periodically according to the pay-as-you-go business model. These protocols also incur high tag-generation computation overhead, which places a heavy burden on data owners…

4.
With the development of the Internet of things (IoT), more and more intelligent terminal devices outsource data to cloud servers (CSs). However, the CS is not fully trusted, and the heterogeneity among different domains makes it difficult for a third-party auditor (TPA) to audit the integrity of outsourced data efficiently. Therefore, a cross-domain cloud storage auditing scheme based on certificateless cryptography is proposed, which avoids both the heavy burden of certificate management and the key escrow problem of identity-based cryptography. At the same time, the TPA can effectively audit the integrity of outsourced data in different domains. Formal security proof and analysis show that the cloud storage auditing scheme satisfies the security and privacy requirements, and performance analysis demonstrates that its efficiency is acceptable.

5.
Many schemes have been proposed to tackle data integrity and retrievability in cloud storage, but few of the existing schemes support data dynamics, public verification, and data privacy protection simultaneously. We propose a public auditing scheme that enables privacy preservation, data dynamics, and batch auditing. A data updating information table is designed to record the status of the data blocks and facilitate data dynamics. Homomorphic authenticators and random masking are exploited to protect data privacy for data owners. The scheme employs a trusted third-party auditor (TTPA) to verify data integrity without learning any information about the data content during the auditing process. The scheme also allows batch auditing, so the TTPA can process multiple auditing requests simultaneously, which greatly accelerates the auditing process. Security and performance analysis shows that our scheme is secure and feasible.
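The abstract does not describe the layout of the data updating information table. As a rough sketch of how such a table can support data dynamics (insert, delete, modify) by tracking block identity, version, and timestamp, here is a toy Python structure; the record fields and method names are hypothetical, not the paper's design.

```python
import time
from dataclasses import dataclass, field

@dataclass
class BlockRecord:
    block_id: int        # logical block number assigned at creation
    version: int = 1     # bumped on every modification
    timestamp: float = field(default_factory=time.time)

class UpdateInfoTable:
    """Toy data updating information table: tracks block order, versions and
    timestamps so an auditor can map a challenged position to the right
    (block_id, version) pair after dynamic updates."""
    def __init__(self, n_blocks: int):
        self.records = [BlockRecord(i) for i in range(n_blocks)]
        self._next_id = n_blocks

    def modify(self, position: int) -> BlockRecord:
        rec = self.records[position]
        rec.version += 1
        rec.timestamp = time.time()
        return rec

    def insert(self, position: int) -> BlockRecord:
        rec = BlockRecord(self._next_id)
        self._next_id += 1
        self.records.insert(position, rec)
        return rec

    def delete(self, position: int) -> BlockRecord:
        return self.records.pop(position)

table = UpdateInfoTable(4)
table.modify(2)
table.insert(1)
print([(r.block_id, r.version) for r in table.records])
```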

6.
With the increasing popularity of cloud computing, privacy has become one of the key problems in cloud security. When data is outsourced to the cloud, data owners need to ensure the security of their private information, cloud service providers need some information about the data to provide high-QoS services, and authorized users need access to the true values of the data. Existing privacy-preserving methods cannot meet all the needs of these three parties at the same time. To address this issue, we propose a retrievable data perturbation method and apply it to privacy preservation for data outsourcing in cloud computing. Our scheme proceeds in four steps. First, an improved random generator is proposed to generate an accurate "noise". Next, a perturbation algorithm is introduced to add the noise to the original data; this hides the private information while leaving the mean and covariance of the data, which service providers may need, unchanged. Then, a retrieval algorithm is proposed to recover the original data from the perturbed data. Finally, we combine the retrievable perturbation with an access control process to ensure that only authorized users can retrieve the original data. Experiments show that our scheme perturbs data correctly, efficiently, and securely.
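The paper's improved random generator is not specified in the abstract. As a minimal sketch of the retrievability idea only, the following uses a seeded NumPy generator so that an authorized user who knows the seed can regenerate and subtract exactly the same noise; the noise is centered so the sample mean is preserved, while covariance preservation would need the more careful noise design the paper describes. The function names and parameters are illustrative.

```python
import numpy as np

def perturb(data: np.ndarray, seed: int) -> np.ndarray:
    """Add retrievable noise: the seed acts as the 'key' that lets an
    authorized user regenerate and remove exactly the same noise."""
    rng = np.random.default_rng(seed)
    noise = rng.normal(0.0, 1.0, size=data.shape)
    noise -= noise.mean(axis=0)          # center so the sample mean is unchanged
    return data + noise

def retrieve(perturbed: np.ndarray, seed: int) -> np.ndarray:
    rng = np.random.default_rng(seed)
    noise = rng.normal(0.0, 1.0, size=perturbed.shape)
    noise -= noise.mean(axis=0)
    return perturbed - noise

original = np.array([[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]])
masked = perturb(original, seed=2024)
assert np.allclose(masked.mean(axis=0), original.mean(axis=0))   # mean preserved
assert np.allclose(retrieve(masked, seed=2024), original)        # exact retrieval
```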

7.
Although existing data sharing systems in online social networks (OSNs) propose to encrypt data before sharing, multiparty access control over encrypted data remains a challenging issue. In this paper, we propose a secure data sharing scheme in OSNs based on ciphertext-policy attribute-based proxy re-encryption and secret sharing. To protect users' sensitive data, our scheme allows users to customize access policies for their data and then outsource the encrypted data to the OSN service provider. Our scheme presents a multiparty access control model that enables a disseminator to update the access policy of a ciphertext if their attributes satisfy the existing access policy. Further, we present a partial decryption construction in which the user's computation overhead is largely reduced by delegating most of the decryption operations to the OSN service provider. We also provide checkability of the results returned by the OSN service provider to guarantee the correctness of the partially decrypted ciphertext. Moreover, our scheme presents an efficient attribute revocation method that achieves both forward and backward secrecy. The security and performance analysis results indicate that the proposed scheme is secure and efficient in OSNs.

8.
Cloud computing is a developing computing paradigm in which the resources of the computing infrastructure are provided as services over the network. Promising as it is, this paradigm also brings new challenges for data security and encrypted storage when the data owner stores sensitive data for sharing on untrusted cloud servers. Fine-grained and scalable data access control requires a huge amount of computation for key distribution and data management. In this article, we achieve this goal by exploiting and uniquely combining ciphertext-policy attribute-based encryption (CP-ABE), linear secret sharing schemes (LSSS), and counter (CTR) mode encryption. The proposed scheme is highly efficient because revocation is conducted at the attribute level rather than the user level. The goals of data confidentiality and collusion resistance (even when cloud servers (CS) collude with users), as well as fine-grainedness and scalability, are also achieved by our access structure.
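In CP-ABE, the LSSS splits a secret across the attributes of the access policy. As a hedged stand-in for the paper's LSSS, here is a minimal Shamir (t, n) threshold sharing, the simplest linear secret sharing scheme, over a small prime field; the threshold, share count, and field size are illustrative only.

```python
import random

P = 2**61 - 1   # prime modulus for the share field (a Mersenne prime)

def make_shares(secret: int, t: int, n: int):
    """Split `secret` into n shares; any t of them reconstruct it."""
    coeffs = [secret] + [random.randrange(P) for _ in range(t - 1)]
    def f(x):
        return sum(c * pow(x, k, P) for k, c in enumerate(coeffs)) % P
    return [(x, f(x)) for x in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at x = 0 recovers the secret."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, -1, P)) % P
    return secret

shares = make_shares(123456789, t=3, n=5)
assert reconstruct(shares[:3]) == 123456789   # any 3 shares suffice
assert reconstruct(shares[1:4]) == 123456789
```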

9.
With the use of cloud computing technology, ownership is separated from the administration of data in the cloud, and shared data might be migrated between different clouds, which brings new challenges to secure data creation, especially for data privacy protection. We propose a user-centric data secure creation scheme (UCDSC) for the security requirements of resource owners in the cloud. In this scheme, a data owner first divides the users into different domains, then encrypts the data and defines different security management policies for it according to the domain. To encrypt data in UCDSC, we present an algorithm based on access-control-condition proxy re-encryption (ACC-PRE), which is proved to be master-secret secure and chosen-ciphertext attack (CCA) secure in the random oracle model. We give the application protocols and compare UCDSC with some existing approaches.

10.
王晓明, 廖志委. China Communications, 2012, 9(5): 129-140
To support dynamic membership of the privileged users with low computation, communication, and storage overheads at receivers, a secure broadcast encryption scheme for ad hoc networks based on a cluster-based structure is proposed, since Mu-Varadharajan's scheme cannot securely remove subscribers and incurs data redundancy. In the proposed scheme, we employ a polynomial function and filter functions as the basic means of constructing the broadcast encryption procedure in order to reduce computation and storage overhead. Compared with existing schemes, our scheme requires low computation, communication, and storage overheads at receivers and can support dynamic membership of the privileged users. Furthermore, our scheme avoids massive message exchanges for establishing the decryption key between members of a cluster. The analysis of security and performance shows that our scheme is more secure than Mu-Varadharajan's scheme and has the same encryption and decryption speed. Our scheme is therefore particularly suitable for low-power devices such as those in ad hoc networks.

11.
In a growing number of information processing applications, data takes the form of continuous data streams rather than traditional stored databases. Monitoring systems that seek to provide monitoring services in a cloud environment must be prepared to deal gracefully with huge data collections without compromising system performance. In this paper, we show that by using the concept of urgent data, our system can shorten the response time for most 'urgent' queries while guaranteeing lower bandwidth consumption. We argue that monitoring data can be treated differently: some data, which we call urgent data, capture critical system events, and their arrival significantly influences the monitoring reaction speed. High-speed collection of urgent data can help the system react in real time when facing fatal errors. A production cloud environment, MagicCube, is used as a test bed. Extensive experiments over both real-world and synthetic traces show that when using urgent data, the monitoring system achieves lower response latency than existing monitoring approaches.
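At its simplest, treating urgent data differently means giving critical monitoring records strictly higher dispatch priority than routine metrics. The sketch below illustrates that idea with a two-class priority queue; the record fields, class names, and thresholds are hypothetical, not MagicCube's actual pipeline.

```python
import heapq
import itertools

URGENT, NORMAL = 0, 1          # lower value = higher priority
_counter = itertools.count()   # FIFO tie-breaker within a class

queue = []

def enqueue(record: dict, urgent: bool) -> None:
    priority = URGENT if urgent else NORMAL
    heapq.heappush(queue, (priority, next(_counter), record))

def dispatch():
    """Always hand urgent monitoring records to the handler first."""
    while queue:
        _, _, record = heapq.heappop(queue)
        yield record

enqueue({"metric": "cpu", "value": 0.41}, urgent=False)
enqueue({"metric": "disk_failure", "node": "vm-07"}, urgent=True)
enqueue({"metric": "mem", "value": 0.66}, urgent=False)
print(list(dispatch()))   # the disk_failure event comes out first
```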

12.
With the increasing popularity of cloud services, attacks on cloud infrastructure are also increasing dramatically, and monitoring the integrity of cloud execution environments remains a difficult task. In this paper, a real-time dynamic integrity validation (DIV) framework is proposed to monitor the integrity of virtual-machine-based execution environments in the cloud. DIV can check the integrity of the whole architecture stack, from the cloud servers up to the VM OS, by extending the current trusted chain into the virtual machine's architecture stack. DIV introduces a trusted third party (TTP) to collect integrity information and remotely detect integrity violations on VMs periodically, avoiding heavy involvement of cloud tenants and unnecessary information leakage from cloud providers. To evaluate the effectiveness and efficiency of the DIV framework, a prototype on KVM/QEMU is implemented, and extensive analysis and experimental evaluation are performed. Experimental results show that DIV can efficiently validate the integrity of files and loaded programs in real time, with minor performance overhead.
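The abstract does not detail DIV's measurement format. As a bare illustration of the underlying idea of file-integrity validation, hashing files and comparing the measurements against a baseline held by the trusted third party, here is a minimal sketch; the paths in the commented usage are hypothetical, and a real framework would measure loaded programs via the trusted chain rather than plain file reads.

```python
import hashlib
from pathlib import Path

def measure(paths):
    """Compute SHA-256 measurements for a set of files (the 'integrity report')."""
    report = {}
    for p in paths:
        data = Path(p).read_bytes()
        report[str(p)] = hashlib.sha256(data).hexdigest()
    return report

def validate(report: dict, baseline: dict) -> list:
    """TTP-side check: return files whose measurement deviates from the
    known-good baseline (or which are missing from the report)."""
    return [path for path, digest in baseline.items()
            if report.get(path) != digest]

# Hypothetical usage: baseline recorded at deployment time, report sent periodically.
# baseline = measure(["/usr/bin/qemu-system-x86_64", "/boot/vmlinuz"])
# violations = validate(measure(baseline.keys()), baseline)
```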

13.
To provide a practicable solution to data confidentiality in cloud storage services, a data assured deletion scheme is proposed that achieves fine-grained access control, resistance to hopping and sniffing attacks, data dynamics, and deduplication. In our scheme, data blocks are encrypted by a two-level encryption approach in which the control keys are generated from a key derivation tree, encrypted by an all-or-nothing algorithm, and then distributed into a DHT network after being partitioned by secret sharing. This guarantees that only authorized users can recover the control keys and then decrypt the outsourced data within an owner-specified data lifetime. Besides confidentiality, data dynamics and deduplication are also achieved, by adjustment of the key derivation tree and by convergent encryption respectively. The analysis and experimental results show that our scheme satisfies its security goals and performs assured deletion with low cost.
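Convergent encryption is the piece that makes deduplication possible: the key is derived from the plaintext itself, so identical blocks encrypt to identical ciphertexts. Below is a minimal, standard-library-only sketch where a SHA-256 counter keystream stands in for a real block cipher; it illustrates the property, not the paper's concrete cipher choice, and is not meant to be secure.

```python
import hashlib

def _keystream(key: bytes, length: int) -> bytes:
    """SHA-256 in counter mode as a toy keystream (stand-in for AES-CTR)."""
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]

def convergent_encrypt(plaintext: bytes):
    key = hashlib.sha256(plaintext).digest()   # key derived from the data itself
    cipher = bytes(a ^ b for a, b in zip(plaintext, _keystream(key, len(plaintext))))
    return key, cipher

def convergent_decrypt(key: bytes, cipher: bytes) -> bytes:
    return bytes(a ^ b for a, b in zip(cipher, _keystream(key, len(cipher))))

k1, c1 = convergent_encrypt(b"identical block")
k2, c2 = convergent_encrypt(b"identical block")
assert c1 == c2    # identical plaintexts give identical ciphertexts, so the
                   # cloud can deduplicate without ever reading the data
assert convergent_decrypt(k1, c1) == b"identical block"
```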

14.
MapReduce has emerged as a popular computing model used in data centers to process large datasets. In the map phase, hash partitioning is employed to distribute data sharing the same key across the nodes of a data-center-scale cluster. However, we observe that this approach can lead to uneven data distribution, which results in skewed loads among reduce tasks and hampers the performance of MapReduce systems. Moreover, worker nodes in MapReduce systems may differ in computing capability due to (1) multiple generations of hardware in non-virtualized data centers, or (2) co-location of virtual machines in virtualized data centers. The heterogeneity among cluster nodes exacerbates the negative effects of uneven data distribution. To improve MapReduce performance in heterogeneous clusters, we propose a novel load balancing approach for the reduce phase. This approach consists of two components: (1) performance prediction for reducers running on heterogeneous nodes based on support vector machine models, and (2) heterogeneity-aware partitioning (HAP), which balances skewed data across reduce tasks. We implement this approach as a plug-in in a current MapReduce system. Experimental results demonstrate that our approach distributes work evenly among reduce tasks and improves MapReduce performance with little overhead.
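The abstract names the idea but not the algorithm, so the following is only a hedged sketch of heterogeneity-aware partitioning: key groups are assigned greedily to the reducer with the lowest predicted relative load, where the per-node capability scores stand in for the paper's SVM-based performance predictions. The key sizes and node names are hypothetical.

```python
import heapq

def heterogeneity_aware_partition(key_sizes: dict, capabilities: dict) -> dict:
    """Greedily assign each key group to the reducer with the lowest
    relative load (assigned bytes / node capability), largest keys first."""
    heap = [(0.0, reducer) for reducer in capabilities]
    heapq.heapify(heap)
    loads = {reducer: 0.0 for reducer in capabilities}
    assignment = {}
    for key, size in sorted(key_sizes.items(), key=lambda kv: -kv[1]):
        _, reducer = heapq.heappop(heap)
        assignment[key] = reducer
        loads[reducer] += size
        heapq.heappush(heap, (loads[reducer] / capabilities[reducer], reducer))
    return assignment

# Hypothetical skewed key distribution and per-node capability scores
# (in the paper these scores would come from the SVM prediction step).
keys = {"a": 900, "b": 300, "c": 250, "d": 100, "e": 50}
nodes = {"fast-node": 2.0, "slow-node": 1.0}
print(heterogeneity_aware_partition(keys, nodes))
```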

15.
Resource scheduling algorithms for ForCES (Forwarding and Control Element Separation) networks need to support the flexibility, programmability, and scalability of node resources. The DBC (Deadline Budget Constraint) algorithm lets users select either cost or time priority and then schedules to meet that requirement. However, this user priority strategy is relatively simple and cannot adapt to dynamic changes in resources, which inevitably reduces QoS. To improve QoS, we draw on the economic and resource scheduling models of cloud computing, use the SLA (Service Level Agreement) as the pricing strategy, and, on the basis of the DBC algorithm, propose a DABP (Deadline And Budget Priority based on DBC) algorithm for ForCES networks; DABP combines both budget and deadline priorities in scheduling. In simulation and testing, we compare the task finish time and cost of the DABP algorithm with the DP (Deadline Priority) and BP (Budget Priority) algorithms. The analysis results show that DABP completes tasks with less cost within the deadline and is beneficial to load balancing of ForCES networks.
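The DABP formulas are not given in the abstract, so this is only a toy sketch of the general deadline-and-budget idea: keep resources predicted to finish before the deadline and within budget, then take the cheapest. The resource names, speeds, and prices are invented for illustration and do not reflect the paper's algorithm.

```python
from dataclasses import dataclass

@dataclass
class Resource:
    name: str
    speed: float       # work units per second
    price: float       # cost per second

def deadline_budget_select(resources, task_length: float,
                           deadline: float, budget: float):
    """Toy selection: filter by deadline and budget, pick the cheapest feasible
    resource (ties broken by finish time)."""
    candidates = []
    for r in resources:
        finish_time = task_length / r.speed
        cost = finish_time * r.price
        if finish_time <= deadline and cost <= budget:
            candidates.append((cost, finish_time, r))
    if not candidates:
        return None                      # no resource meets both constraints
    return min(candidates, key=lambda c: (c[0], c[1]))[2]

pool = [Resource("fe-1", speed=4.0, price=1.0),
        Resource("fe-2", speed=8.0, price=3.0),
        Resource("fe-3", speed=2.0, price=0.5)]
print(deadline_budget_select(pool, task_length=16.0, deadline=5.0, budget=10.0))
```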

16.
The failure detection module is one of the important components in fault-tolerant distributed systems, especially cloud platforms. However, fast and accurate failure detection becomes more and more difficult when the status of the network and other resources keeps changing. This study presents an efficient adaptive failure detection mechanism based on the Volterra series, which can make predictions from a small amount of data. The mechanism uses a Volterra filter for time series prediction and a decision tree for decision making. The major contributions are applying a Volterra filter to cloud failure prediction and introducing a user factor for the different QoS requirements of different modules and levels of IaaS. A detailed implementation is proposed, and an evaluation is performed in the Beijing and Guangzhou experimental environments.
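For readers unfamiliar with Volterra filters, here is a minimal second-order example fit by least squares on a synthetic series; the filter order, window length, and data are illustrative assumptions, and the paper's decision-tree stage is omitted.

```python
import numpy as np

def volterra_features(x, p=2):
    """Bias + linear + second-order (quadratic/cross) terms over p past samples."""
    rows = []
    for t in range(p, len(x)):
        w = x[t - p:t]
        quad = [w[i] * w[j] for i in range(p) for j in range(i, p)]
        rows.append(np.concatenate(([1.0], w, quad)))
    return np.array(rows), x[p:]

# Hypothetical monitored metric (e.g. heartbeat inter-arrival times).
rng = np.random.default_rng(0)
series = np.sin(np.linspace(0, 20, 200)) + 0.05 * rng.standard_normal(200)

X, y = volterra_features(series, p=2)
kernel, *_ = np.linalg.lstsq(X, y, rcond=None)     # Volterra kernel coefficients

# One-step-ahead prediction from the two most recent samples.
w = series[-2:]
quad = [w[i] * w[j] for i in range(2) for j in range(i, 2)]
predicted = float(np.concatenate(([1.0], w, quad)) @ kernel)
print("predicted next value:", predicted)
```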

17.
Cloud computing systems play a vital role in national security. This paper describes a conceptual framework, called the dual-system architecture, for protecting computing environments. While attempting to be logical and rigorous, it avoids heavy formalism and instead adopts the process algebra Communicating Sequential Processes (CSP).

18.
This paper proposes an analytical mining tool for big graph data based on MapReduce and the bulk synchronous parallel (BSP) computing model. The tool is named the MapReduce and BSP based Graph-mining tool (MBGM). The core of this mining system is four sets of parallel graph-mining algorithms programmed in the BSP parallel model and one set of data extraction-transformation-loading (ETL) algorithms implemented in MapReduce. To invoke these algorithm sets, we designed a workflow engine optimized for cloud computing. Finally, a well-designed data management function enables users to view, delete, and input data in the Hadoop distributed file system (HDFS). Experiments on artificial data show that the graph-mining components of MBGM are efficient.
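As a small, self-contained illustration of the MapReduce side of such a pipeline (not MBGM's actual code), the sketch below runs an in-memory map, shuffle, and reduce pass that computes vertex out-degrees from an edge list; the edge data is made up.

```python
from collections import defaultdict
from itertools import chain

edges = [("a", "b"), ("a", "c"), ("b", "c"), ("c", "a"), ("d", "a")]

def mapper(edge):
    """Emit (vertex, 1) for the source of every edge."""
    src, _dst = edge
    yield (src, 1)

def shuffle(pairs):
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups.items()

def reducer(key, values):
    return (key, sum(values))

# map -> shuffle -> reduce, all in memory (a stand-in for a Hadoop job).
mapped = chain.from_iterable(mapper(e) for e in edges)
out_degree = dict(reducer(k, vs) for k, vs in shuffle(mapped))
print(out_degree)    # {'a': 2, 'b': 1, 'c': 1, 'd': 1}
```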

19.
MTBAC: A Mutual Trust Based Access Control Model in Cloud Computing
As a new computing mode, cloud computing can provide users with virtualized and scalable web services, but it faces serious security challenges. Access control is one of the most important measures to ensure the security of cloud computing, yet applying traditional access control models to the cloud directly cannot solve the uncertainty and vulnerability caused by its open conditions. In a cloud computing environment, data security during interactions between users and the cloud can be effectively guaranteed only when the security and reliability of both interacting parties are ensured. Therefore, building a mutual trust relationship between users and the cloud platform is the key to implementing new kinds of access control methods in cloud computing environments. Combining this with trust management (TM), a mutual trust based access control (MTBAC) model is proposed in this paper. The MTBAC model takes both the user's behavioral trust and the cloud service node's credibility into consideration, and trust relationships between users and cloud service nodes are established by a mutual trust mechanism. The security problems of access control are solved by implementing the MTBAC model in a cloud computing environment. Simulation experiments show that the MTBAC model can guarantee trustworthy interaction between users and cloud service nodes.
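The concrete trust formulas are not given in the abstract. As a rough sketch only, the following models user behavioral trust and node credibility as exponentially weighted averages of interaction evidence, with access granted when both sides clear a threshold; the update rule, weights, and thresholds are assumptions, not MTBAC's definitions.

```python
def update_trust(old_trust: float, evidence: float, alpha: float = 0.3) -> float:
    """Exponentially weighted trust update; `evidence` in [0, 1] scores the
    latest interaction (1 = fully compliant behavior / correct service)."""
    return (1 - alpha) * old_trust + alpha * evidence

def grant_access(user_trust: float, node_credibility: float,
                 user_threshold: float = 0.6, node_threshold: float = 0.6) -> bool:
    """Mutual check: the node must trust the user AND the user must trust the node."""
    return user_trust >= user_threshold and node_credibility >= node_threshold

# Hypothetical interaction history.
user_trust = 0.5
for behavior_score in (0.9, 0.8, 1.0):      # well-behaved requests raise trust
    user_trust = update_trust(user_trust, behavior_score)

node_credibility = update_trust(0.7, 0.95)  # node delivered the promised service
print(grant_access(user_trust, node_credibility))
```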

20.
To achieve fine-grained access control in cloud computing, existing digital rights management (DRM) schemes adopt attribute-based encryption as the main encryption primitive. However, these schemes are inefficient and cannot support dynamic updating of usage rights stored in the cloud. In this paper, we propose a novel DRM scheme with secure key management and dynamic usage control in cloud computing. We present a secure key management mechanism based on attribute-based encryption and proxy re-encryption: only users whose attributes satisfy the access policy of the encrypted content and who have effective usage rights can recover the content encryption key and further decrypt the content. The attribute-based mechanism allows the content provider to selectively provide fine-grained access control of content among a set of users, and also enables the license server to implement immediate attribute and user revocation. Moreover, our scheme supports privacy-preserving dynamic usage control based on additive homomorphic encryption, which allows the license server in the cloud to update users' usage rights dynamically without learning the plaintext. Extensive analytical results indicate that our proposed scheme is secure and efficient.
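To make the additive-homomorphic step concrete, here is a toy Paillier example with deliberately tiny, insecure primes; it shows how a server could add to an encrypted usage counter without decrypting it. The counter semantics and parameter sizes are illustrative, and the abstract does not state that Paillier is the scheme the paper uses.

```python
import random
from math import gcd

# Toy Paillier key pair (insecure parameters, illustration only).
p, q = 10007, 10009
n = p * q
n2 = n * n
g = n + 1
lam = (p - 1) * (q - 1) // gcd(p - 1, q - 1)         # lcm(p-1, q-1)
mu = pow((pow(g, lam, n2) - 1) // n, -1, n)          # decryption helper

def encrypt(m: int) -> int:
    r = random.randrange(1, n)
    while gcd(r, n) != 1:
        r = random.randrange(1, n)
    return pow(g, m, n2) * pow(r, n, n2) % n2

def decrypt(c: int) -> int:
    return (pow(c, lam, n2) - 1) // n * mu % n

def homomorphic_add(c1: int, c2: int) -> int:
    """Multiplying ciphertexts adds the underlying plaintexts."""
    return c1 * c2 % n2

# The license server stores an encrypted usage counter and tops it up blindly.
enc_counter = encrypt(5)                                  # user holds 5 plays
enc_counter = homomorphic_add(enc_counter, encrypt(3))    # provider grants 3 more
assert decrypt(enc_counter) == 8                          # only the key holder sees the total
```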
