1.
To address the problems that a single cloud storage provider may exercise monopolistic control over data and that the convergent encryption used by existing cloud deduplication systems is vulnerable to brute-force attacks, a cloud storage deduplication scheme based on signature and hashing techniques is proposed. A two-layer verification mechanism audits data integrity during deduplication, so the scheme can both verify file integrity and precisely locate corrupted data blocks. A Merkle hash tree is constructed to generate verification values and compute deduplication tags, ensuring that duplicate data can be detected, and a combined Mapbox and Lockbox mechanism encrypts the data so that unauthorized users cannot access the files. Security analysis and simulation results show that the scheme effectively resists brute-force attacks while reducing both the computation cost of deduplication tags and the required storage space.
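A minimal Python sketch of the tag-generation idea only: hashing a file's blocks into a Merkle tree whose root serves both as a verification value and as the deduplication tag. The block layout, hash choice, and function names are illustrative assumptions rather than the paper's implementation, and the Mapbox/Lockbox encryption step is not shown.

```python
import hashlib

def _h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(blocks: list[bytes]) -> bytes:
    """Build a Merkle tree over block hashes; the root doubles as a dedup tag."""
    level = [_h(b) for b in blocks]          # leaf = hash of each data block
    if not level:
        return _h(b"")
    while len(level) > 1:
        if len(level) % 2:                   # duplicate the last node on odd-sized levels
            level.append(level[-1])
        level = [_h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

blocks = [b"block-0", b"block-1", b"block-2"]
tag = merkle_root(blocks)
print(tag.hex())  # identical files produce identical tags, so duplicates are detectable
```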
2.
A Multi-Version Container Image Loading Method Based on Chunk Reuse
陆志刚  徐继伟  黄涛 《软件学报》2020,31(6):1875-1888
Containers package an application together with its supporting software and libraries into an image, and application upgrades are delivered by publishing new image versions, so different versions share a large amount of identical data. Loading an image is time-consuming and stretches container startup from milliseconds to seconds or even minutes. Reusing the data shared across versions helps reduce container loading time. Current container images rely on inheritance and layered loading, which effectively reuses supporting software and libraries, but there is still no reliable reuse mechanism for data inside the application itself. This paper proposes a multi-version container image loading method based on chunk reuse, which improves loading efficiency by reusing identical data across image versions. The core idea is to split container images into fine-grained chunks with a boundary-matching chunking method, use each chunk's hash value as its unique fingerprint, and detect duplicate chunks by searching for repeated fingerprints in a B-tree, thereby reducing data transfer. Experimental results show that the method speeds up container image loading by more than 5.8X.
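A much-simplified sketch of the loading idea under stated assumptions: split an image into variable-size chunks at content-defined boundaries, fingerprint each chunk with a hash, and transfer only chunks whose fingerprints are not already known. The toy boundary test and the Python set standing in for the paper's B-tree index are illustrative, not the authors' algorithm.

```python
import hashlib

AVG_MASK = 0x0FFF  # boundary when the rolling value's low bits are zero (~4 KiB average)

def split_chunks(data: bytes, min_size=1024, max_size=16384):
    """Very simplified content-defined chunking: cut where a byte-window value matches a pattern."""
    chunks, start, h = [], 0, 0
    for i, b in enumerate(data):
        h = ((h << 1) + b) & 0xFFFFFFFF        # toy rolling value, not the paper's boundary function
        if (i - start + 1 >= min_size and (h & AVG_MASK) == 0) or i - start + 1 >= max_size:
            chunks.append(data[start:i + 1])
            start, h = i + 1, 0
    if start < len(data):
        chunks.append(data[start:])
    return chunks

def upload_image(data: bytes, known_fingerprints: set[str]) -> int:
    """Return the number of bytes actually transferred after chunk-level reuse."""
    sent = 0
    for chunk in split_chunks(data):
        fp = hashlib.sha256(chunk).hexdigest()  # fingerprint used as the chunk's identity
        if fp not in known_fingerprints:
            known_fingerprints.add(fp)
            sent += len(chunk)                  # only previously unseen chunks are transferred
    return sent
```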
3.
In cloud storage systems that perform client-side deduplication, proofs of ownership prevent an attacker from obtaining an entire file with nothing but its digest. However, deduplication schemes based on proofs of ownership remain vulnerable to side-channel attacks: by uploading a file and observing whether deduplication occurs, an attacker can tell whether that file already exists on the cloud server. This paper proposes an improved proof-of-ownership deduplication scheme based on a storage gateway. The gateway interacts with the cloud server on the user's behalf, making the deduplication process transparent to users, and uses traffic obfuscation to resist side-channel and related-file attacks. Analysis and comparison show that the scheme reduces client-side computation cost and improves security.
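A toy sketch of the gateway idea, not the paper's protocol: clients always hand the full upload to the gateway, which performs the duplicate check itself and returns an indistinguishable receipt, so a client cannot observe whether deduplication occurred. All class and method names are hypothetical.

```python
import hashlib, random

class StorageGateway:
    """Toy sketch: the gateway mediates uploads so dedup decisions stay invisible to clients."""

    def __init__(self):
        self._store = {}      # fingerprint -> data, standing in for the cloud back end

    def upload(self, data: bytes) -> str:
        fp = hashlib.sha256(data).hexdigest()
        if fp not in self._store:
            self._store[fp] = data                    # only the first copy reaches the cloud
        # Generate dummy bytes standing in for the obfuscation traffic the gateway would
        # emit so duplicate and non-duplicate uploads look alike to an observing client.
        _padding = random.randbytes(random.randint(0, 4096))
        return fp                                     # the client always gets the same kind of receipt
```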
4.
To address the poor deduplication results and weak feature aggregation of current ciphertext deduplication and mining methods, a cross-user ciphertext deduplication and mining method under key sharing is proposed. Nonlinear statistical sequence analysis is used to sample the statistical features of cross-user ciphertext data under key sharing, a linear encoding of the ciphertext data is designed by identifying statistical features from different domains, and the average mutual information features of the cross-user ciphertext data are extracted. A matched filtering method is then used to achieve cross-user...
5.
A Cloud-of-Clouds Storage System Based on Deduplication
With the rapid development and adoption of cloud storage, more and more enterprises and users are moving their data from local systems to cloud storage providers. While enjoying high-quality cloud services, however, storing data with only a single provider carries risks such as provider monopoly, data availability, and security. To address this, a cloud-of-clouds storage architecture based on deduplication is proposed: redundant data is first eliminated, and then, according to the reference rate of each chunk in the deduplicated set, chunks are stored across multiple cloud providers using two data layouts, replication and erasure coding. The replication layout is easy to deploy but has a high storage overhead; the erasure-coding layout has a low storage overhead but requires encoding and decoding and therefore a high computation overhead. To exploit the strengths of both layouts together with the reference characteristics of deduplicated data, the new method stores highly referenced chunks by replication and all other chunks with erasure codes, achieving a better overall balance of performance and cost. A prototype implementation and its evaluation show that, compared with existing cloud-of-clouds storage strategies, the new method substantially improves both performance and cost.
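A minimal sketch of the placement policy only, under assumptions of my own: chunks whose reference count reaches an illustrative threshold are replicated to every provider, and the rest are split across providers with a single XOR parity piece standing in for a real erasure code. The names, threshold, and coding scheme are not the paper's.

```python
from functools import reduce

REF_THRESHOLD = 3   # illustrative cut-off for "highly referenced" chunks

def place_chunk(chunk: bytes, ref_count: int, providers: list) -> dict:
    """Decide the layout for one deduplicated chunk."""
    if ref_count >= REF_THRESHOLD:
        # Replication: cheap to read and reconstruct, but stores one full copy per provider.
        return {"layout": "replicate", "pieces": [chunk for _ in providers]}
    # Stand-in for an erasure code: split into data pieces plus one XOR parity piece.
    k = max(1, len(providers) - 1)
    step = -(-len(chunk) // k)                       # ceil division
    parts = [chunk[i * step:(i + 1) * step].ljust(step, b"\0") for i in range(k)]
    parity = reduce(lambda a, b: bytes(x ^ y for x, y in zip(a, b)), parts)
    return {"layout": "erasure", "pieces": parts + [parity]}
```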
6.
Better duplicate elimination performance causes higher fragmentation, which in turn degrades restore performance. Restore performance therefore needs to be optimized either by strengthening locality through selective rewriting or by making the best use of the limited available memory through cache optimization. In this paper, we explore SACRO, SSD-Assisted Chunk Caching for Restore Optimization. It avoids repetitive accesses to chunk containers on disk by using SSDs (solid-state drives) as a secondary chunk cache. An FRT (Future Reference Table) is constructed from the recipe of a backup stream, and the Future Reference Count entries in the FRT are used to assign priorities to chunks as they are accessed during restoration. These priority values, coupled with access distances, decide whether a chunk is cached in memory or on the SSD. The restore performance of SACRO significantly outperforms other restore approaches that use chunk caching. For one dataset, at a 4 MB cache size and a 4 MB look-ahead window size, its restore factor is 3.5 MB/container_access, while ALACC and LRU record 3.4 MB/container_access and 0.9 MB/container_access, respectively. Moreover, SACRO can be integrated into existing deduplication systems with very limited modification to the deduplication system.
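A simplified sketch of the FRT-driven policy described above, not the SACRO implementation: count future references from the backup recipe, then use the remaining count and the distance to the next access of the same chunk to pick a cache tier. The distance limit and tier names are illustrative assumptions.

```python
from collections import Counter

def build_frt(recipe: list[str]) -> Counter:
    """Future Reference Table: how many times each chunk fingerprint will still be needed."""
    return Counter(recipe)

def next_distance(recipe: list[str], pos: int) -> float:
    """Distance from position pos to the next access of the same chunk (inf if none)."""
    fp = recipe[pos]
    for i in range(pos + 1, len(recipe)):
        if recipe[i] == fp:
            return i - pos
    return float("inf")

def cache_tier(frt: Counter, fp: str, distance: float,
               ram_distance_limit: int = 64) -> str:
    """Illustrative policy: hot, soon-to-be-reused chunks stay in RAM, the rest spill to SSD."""
    frt[fp] -= 1                        # this access consumes one future reference
    if frt[fp] <= 0 or distance == float("inf"):
        return "evict"                  # never needed again during this restore
    if distance <= ram_distance_limit:
        return "ram"
    return "ssd"

recipe = ["c1", "c2", "c1", "c3", "c2"]
frt = build_frt(recipe)
print(cache_tier(frt, recipe[0], next_distance(recipe, 0)))  # "ram": c1 is reused at distance 2
```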
7.
Chunking is a process that splits a file into smaller pieces called chunks. In applications such as remote data compression, data synchronization, and data deduplication, chunking is important because it determines the duplicate detection performance of the system. Content-defined chunking (CDC) is a method that splits files into variable-length chunks, where the cut points are defined by internal features of the files. Unlike fixed-length chunks, variable-length chunks are more resistant to byte shifting, which increases the probability of finding duplicate chunks within a file and between files. However, CDC algorithms require additional computation to find the cut points, which can be expensive for some applications. In our previous work (Widodo et al., 2016), the hash-based CDC algorithm took more processing time than any other stage of the deduplication system. This paper proposes a high-throughput hash-less chunking method called Rapid Asymmetric Maximum (RAM). Instead of using hashes, RAM uses byte values to declare the cut points. The algorithm uses a fixed-size window and a variable-size window to find a maximum-valued byte, which is the cut point. The maximum-valued byte is included in the chunk and located at the chunk boundary. This configuration allows RAM to do fewer comparisons while retaining the CDC property. We compared RAM with existing hash-based and hash-less deduplication systems. The experimental results show that the proposed algorithm achieves higher throughput and more bytes saved per second than other chunking algorithms.
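A short sketch of one possible reading of the RAM idea, hedged accordingly: take the maximum byte value of a fixed-size window at the start of each chunk, then extend a variable-size window until a byte at least as large appears and cut there, keeping that maximum-valued byte at the chunk boundary. The window size and edge-case handling are my assumptions, not the published algorithm.

```python
def ram_chunks(data: bytes, fixed_window: int = 4096):
    """Simplified sketch of hash-less RAM-style chunking (one possible reading of the scheme)."""
    chunks, start = [], 0
    n = len(data)
    while start < n:
        end = min(start + fixed_window, n)
        threshold = max(data[start:end])              # max byte value of the fixed window
        cut = n                                       # default: the rest of the file is one chunk
        for i in range(end, n):                       # variable-size window scan
            if data[i] >= threshold:
                cut = i + 1                           # include the maximum-valued byte at the boundary
                break
        chunks.append(data[start:cut])
        start = cut
    return chunks
```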
8.
Cloud storage is essential for storing and retrieving user data in distributed data centres, and the service is typically offered on a pay-per-use basis for the capacity consumed. Because data centres hold massive amounts of data with similar content and file structures kept in multiple copies, duplication inflates storage consumption. Existing deduplication systems do not reduce data efficiently because they are inaccurate at identifying similar data, which further drives up storage consumption and cost. To resolve this problem, this paper proposes an efficient storage-reduction scheme called Hash-Indexing Block-based Deduplication (HIBD), based on the Segmented Bind Linkage (SBL) method, for reducing storage in a cloud environment. First, preprocessing is performed using a sparse augmentation technique. The preprocessed files are then segmented into blocks to build a hash index. Block contents are compared with other files through Semantic Content Source Deduplication (SCSD), which identifies content shared between files. Based on the content presence count, Distance Vector Weightage Correlation (DVWC) estimates a document similarity weight, and related files are grouped into clusters. Finally, segmented bind linkage compares documents to find duplicate content within each cluster using the similarity weight and coefficient matching. The scheme identifies data redundancy efficiently and reduces service cost in distributed cloud storage.
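A heavily simplified sketch of the pipeline's shape, with my own stand-ins: fixed-size block hashes play the role of the hash index, and a shared-block ratio replaces the DVWC weighting used to group related files. SCSD, SBL, and the sparse-augmentation preprocessing are not modelled.

```python
import hashlib

BLOCK_SIZE = 4096

def block_hashes(content: bytes) -> set[str]:
    """Hash index for a file: the set of fingerprints of its fixed-size blocks."""
    return {hashlib.sha256(content[i:i + BLOCK_SIZE]).hexdigest()
            for i in range(0, len(content), BLOCK_SIZE)}

def similarity_weight(a: set[str], b: set[str]) -> float:
    """Simple shared-block ratio standing in for the paper's DVWC weighting."""
    return len(a & b) / max(1, len(a | b))

def group_similar(files: dict[str, bytes], threshold: float = 0.5) -> list[list[str]]:
    """Greedy clustering: a file joins the first group it is sufficiently similar to."""
    indexes = {name: block_hashes(data) for name, data in files.items()}
    groups: list[list[str]] = []
    for name, idx in indexes.items():
        for group in groups:
            if similarity_weight(idx, indexes[group[0]]) >= threshold:
                group.append(name)
                break
        else:
            groups.append([name])
    return groups
```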
9.
With the rapid growth of the multi-cloud storage market, more and more users choose to store their data in the cloud, and duplicate data in cloud environments is growing explosively as a result. Because cloud service brokers are independent of one another, traditional deduplication can only eliminate redundant data on the few cloud servers managed by a single broker. To further strengthen deduplication in cloud environments, this paper proposes a multi-broker joint deduplication scheme. Blockchain technology is used to foster cooperation among cloud service brokers and build a broker federation, expanding the scope of deduplication from the cloud managed by one broker to the multi-cloud managed by many, while delivering a win-win outcome for users, cloud service brokers, and cloud service providers. Experiments show that the multi-broker joint deduplication scheme significantly improves deduplication effectiveness and saves network bandwidth.
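A toy sketch of the coordination idea only: brokers consult a shared, append-only fingerprint ledger (a stand-in for the blockchain) before storing a new copy, so a chunk already held anywhere in the federation is not stored again. The ledger structure and all names are hypothetical.

```python
import hashlib, json, time

class FingerprintLedger:
    """Toy append-only ledger shared by the broker federation (stands in for a blockchain)."""

    def __init__(self):
        self.blocks = [{"prev": "0" * 64, "entries": [], "ts": time.time()}]

    def _head_hash(self) -> str:
        return hashlib.sha256(json.dumps(self.blocks[-1], sort_keys=True).encode()).hexdigest()

    def lookup(self, fingerprint: str):
        for block in self.blocks:
            for entry in block["entries"]:
                if entry["fp"] == fingerprint:
                    return entry["holder"]        # some broker already stores this data
        return None

    def record(self, fingerprint: str, holder: str):
        self.blocks.append({"prev": self._head_hash(),
                            "entries": [{"fp": fingerprint, "holder": holder}],
                            "ts": time.time()})

def broker_upload(ledger: FingerprintLedger, broker: str, data: bytes) -> str:
    """Store the data only if no broker in the federation holds it yet."""
    fp = hashlib.sha256(data).hexdigest()
    holder = ledger.lookup(fp)
    if holder is None:
        ledger.record(fp, broker)
        return f"stored by {broker}"
    return f"deduplicated: already held by {holder}"
```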
10.
Cloud computing and its related technologies have developed at a remarkable pace. Centralized cloud storage, however, faces challenges such as latency, storage capacity, and packet loss in the network. Cloud storage attracts attention because of its huge capacity and its promise to keep secret information secure, yet data leakage remains a billion-dollar concern for consumers, and cost models and effectiveness still leave room for improvement. Traditional data security techniques are usually based on cryptographic methods, but such approaches may not withstand attacks launched from inside the cloud server. We therefore propose a security model called multi-layer storage (MLS) that uses elliptic curve cryptography (ECC). The model emphasizes data protection and removes duplicates at the initial stage. Following a divide-and-combine methodology, the data are split into three parts: the first two portions are stored in the local system and on fog nodes and are secured with an encoding and decoding technique, while the remaining encrypted portion is stored in the cloud. The model's viability has been evaluated in terms of security measures and test results, and it is a strong complement to existing cloud storage methods.
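A toy sketch of the divide-and-combine placement only: the file is split into three parts destined for the local system, a fog node, and the cloud. The keystream cipher here is a deliberately simple, insecure stand-in for the paper's ECC-based protection, and all tier and function names are illustrative.

```python
import hashlib

def _keystream_xor(data: bytes, key: bytes) -> bytes:
    """Toy XOR keystream (NOT secure): a stand-in for the paper's ECC-based encryption."""
    stream, counter = bytearray(), 0
    while len(stream) < len(data):
        stream += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return bytes(b ^ k for b, k in zip(data, stream))

def place_in_tiers(data: bytes, key: bytes) -> dict:
    """Split the file into three parts and encode one for each tier: local, fog, cloud."""
    third = -(-len(data) // 3)                        # ceil division
    parts = [data[i * third:(i + 1) * third] for i in range(3)]
    return {tier: _keystream_xor(part, key + b"|" + tier.encode())
            for tier, part in zip(("local", "fog", "cloud"), parts)}

def recover(tiers: dict, key: bytes) -> bytes:
    """XOR is symmetric, so applying the same keystream again restores the original parts."""
    return b"".join(_keystream_xor(tiers[t], key + b"|" + t.encode())
                    for t in ("local", "fog", "cloud"))
```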