1.
CLEFIA, a new 128-bit block cipher proposed by Sony Corporation, is increasingly attracting cryptanalysts' attention. In this paper, we present two new impossible differential attacks on 13 rounds of CLEFIA-128. The proposed attacks utilize a variety of previously known techniques, in particular the hash table technique and redundancy in the key schedule of this block cipher. The first attack does not consider the whitening layers of CLEFIA, requires 2^109.5 chosen plaintexts, and has a running time equivalent to about 2^112.9 encryptions. The second attack preserves the whitening layers, requires 2^117.8 chosen plaintexts, and has a total time complexity equivalent to about 2^121.2 encryptions.
2.
The existing solutions to keyword search in the cloud can be divided into two categories: searching on exact keywords and searching on error-tolerant keywords. An error-tolerant keyword search scheme permits searches on encrypted data with only an approximation of some keyword, and so suits cases where users' search input might not exactly match the pre-set keywords. In this paper, we first present a general framework for searching on error-tolerant keywords. Then we propose a concrete scheme, based on a fuzzy extractor, which is proved secure against an adaptive adversary under a well-defined security definition. The scheme is suitable for all similarity metrics, including Hamming distance, edit distance, and set difference. It does not require the user to construct or store anything in advance, other than the key used to calculate the trapdoor of keywords and the key to encrypt data documents; thus, our scheme greatly eases the users' burden. Moreover, our scheme is able to transform the server's search for error-tolerant keywords on ciphertexts into a search for exact keywords on plaintexts, so the server can use any existing exact-keyword search approach over an index table of plaintexts.
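The three similarity metrics the abstract names can be made concrete. The following is a minimal illustrative sketch (not from the paper) of each metric in Python:

```python
def hamming(a: str, b: str) -> int:
    """Number of positions at which two equal-length strings differ."""
    assert len(a) == len(b)
    return sum(x != y for x, y in zip(a, b))

def edit_distance(a: str, b: str) -> int:
    """Levenshtein distance via the standard dynamic-programming table."""
    m, n = len(a), len(b)
    d = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        d[i][0] = i  # deleting i characters
    for j in range(n + 1):
        d[0][j] = j  # inserting j characters
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            cost = 0 if a[i - 1] == b[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,       # deletion
                          d[i][j - 1] + 1,       # insertion
                          d[i - 1][j - 1] + cost)  # substitution
    return d[m][n]

def set_difference(a: set, b: set) -> int:
    """Size of the symmetric difference between two keyword sets."""
    return len(a ^ b)
```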
3.
Clustering data streams has drawn a lot of attention in the last few years due to their ever-growing presence. Data streams put additional challenges on clustering, such as limited time and memory and the requirement of one-pass processing. Furthermore, discovering clusters with arbitrary shapes is very important in data stream applications. Data streams are infinite and evolve over time, and we do not have any prior knowledge about the number of clusters. In a data stream environment, due to various factors, some noise appears occasionally. Density-based methods are a remarkable class of algorithms for clustering data streams: they can discover arbitrary-shape clusters, detect noise, and do not need the number of clusters in advance. Due to data stream characteristics, traditional density-based clustering is not directly applicable, and recently many density-based clustering algorithms have been extended to data streams. The main idea in these algorithms is to use density-based methods in the clustering process while overcoming the constraints imposed by the data stream's nature. The purpose of this paper is to shed light on algorithms in the literature on density-based clustering over data streams. We not only summarize the main density-based clustering algorithms on data streams and discuss their uniqueness and limitations, but also explain how they address the challenges of clustering data streams. Moreover, we investigate the evaluation metrics used in validating cluster quality and measuring algorithms' performance. It is hoped that this survey will serve as a stepping stone for researchers studying data stream clustering, particularly density-based algorithms.
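Many of the surveyed density-based stream algorithms (DenStream-style) maintain micro-clusters under a damped window, where a point's weight decays as 2^(-λ·age). The sketch below is an illustrative toy (the class and parameter names are our own, not from any specific algorithm in the survey):

```python
def point_weight(t_now: float, t_arrival: float, lam: float) -> float:
    """Weight of a point under the damped-window model: 2^(-lam * age)."""
    return 2.0 ** (-lam * (t_now - t_arrival))

class MicroCluster:
    """Toy micro-cluster keeping a decayed weight and linear sum (1-D, illustrative)."""
    def __init__(self, lam: float):
        self.lam = lam
        self.weight = 0.0
        self.linear_sum = 0.0
        self.last_update = 0.0

    def insert(self, x: float, t: float):
        # Decay the stored statistics to time t, then absorb the new point.
        decay = 2.0 ** (-self.lam * (t - self.last_update))
        self.weight = self.weight * decay + 1.0
        self.linear_sum = self.linear_sum * decay + x
        self.last_update = t

    def center(self) -> float:
        return self.linear_sum / self.weight
```

Old points thus fade out gradually, which is how these algorithms track evolving clusters in bounded memory.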
4.
Minimizing the Discrepancy Between Source and Target Domains by Learning Adapting Components
Predicting the response variables of the target dataset is one of the main problems in machine learning. Predictive models are desired to perform satisfactorily in a broad range of target domains. However, that may not be plausible if there is a mismatch between the source and target domain distributions. The goal of domain adaptation algorithms is to solve this issue and deploy a model across different target domains. We propose a method based on kernel distribution embedding and Hilbert-Schmidt independence criterion (HSIC) to address this problem. The proposed method embeds both source and target data into a new feature space with two properties: 1) the distributions of the source and the target datasets are as close as possible in the new feature space, and 2) the important structural information of the data is preserved. The embedded data can be in lower dimensional space while preserving the aforementioned properties and therefore the method can be considered as a dimensionality reduction method as well. Our proposed method has a closed-form solution and the experimental results show that it works well in practice.
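The HSIC criterion the abstract relies on has a simple biased empirical estimator, tr(KHLH)/(n-1)^2, where K and L are kernel Gram matrices and H is the centering matrix. A minimal sketch with Gaussian kernels (illustrative only; the paper's exact kernels and formulation may differ):

```python
import numpy as np

def gaussian_kernel(X: np.ndarray, sigma: float = 1.0) -> np.ndarray:
    """Gram matrix of the Gaussian (RBF) kernel on the rows of X."""
    sq = np.sum(X ** 2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2.0 * X @ X.T  # pairwise squared distances
    return np.exp(-d2 / (2.0 * sigma ** 2))

def hsic(X: np.ndarray, Y: np.ndarray, sigma: float = 1.0) -> float:
    """Biased empirical HSIC estimate: tr(K H L H) / (n - 1)^2."""
    n = X.shape[0]
    K = gaussian_kernel(X, sigma)
    L = gaussian_kernel(Y, sigma)
    H = np.eye(n) - np.ones((n, n)) / n  # centering matrix
    return float(np.trace(K @ H @ L @ H)) / (n - 1) ** 2
```

Larger HSIC values indicate stronger statistical dependence between the two samples, which is how it can both align distributions and preserve structure in an embedding objective.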
5.
Recent years have witnessed a growing trend of Web services on the Internet, and there is a great need for effective service recommendation mechanisms. Existing methods mainly focus on the properties of individual Web services (e.g., functional and non-functional properties) but largely ignore users' views on services, thus failing to provide personalized service recommendations. In this paper, we study the trust relationships between users and Web services using network modeling and analysis techniques. Based on our findings and the service network model we build, we then propose a collaborative filtering algorithm called Trust-Based Service Recommendation (TSR) to provide personalized service recommendations. This systematic approach to service network modeling and analysis can also be used for other service recommendation studies.
6.
Improving Scalability of Cloud Monitoring Through PCA-Based Clustering of Virtual Machines
Cloud computing has recently emerged as a leading paradigm to allow customers to run their applications in virtualized large-scale data centers. Existing solutions for monitoring and management of these infrastructures consider virtual machines (VMs) as independent entities with their own characteristics. However, these approaches suffer from scalability issues due to the increasing number of VMs in modern cloud data centers. We claim that scalability issues can be addressed by leveraging the similarity among VMs' behavior in terms of resource usage patterns. In this paper we propose an automated methodology to cluster VMs starting from the usage of multiple resources, assuming no knowledge of the services executed on them. The innovative contribution of the proposed methodology is the use of the statistical technique known as principal component analysis (PCA) to automatically select the most relevant information to cluster similar VMs. We apply the methodology to two case studies, a virtualized testbed and a real enterprise data center. In both case studies, the automatic data selection based on PCA allows us to achieve high performance, with a percentage of correctly clustered VMs between 80% and 100% even for short time series (1 day) of monitored data. Furthermore, we estimate the potential reduction in the amount of collected data to demonstrate how our proposal may address the scalability issues related to monitoring and management in cloud computing data centers.
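The core PCA step can be sketched briefly: treat each VM as a row of resource metrics, center the data, and keep the fewest principal components that explain a chosen fraction of the variance. This is a generic illustration under our own assumptions (the paper's exact preprocessing and thresholds are not given here):

```python
import numpy as np

def pca_reduce(X: np.ndarray, var_fraction: float = 0.9):
    """Project rows of X (one row per VM, columns = resource metrics over time)
    onto the fewest principal components explaining >= var_fraction of variance.
    Returns the reduced representation and the number of components kept."""
    Xc = X - X.mean(axis=0)                 # center each metric
    U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
    var = s ** 2 / np.sum(s ** 2)           # variance explained per component
    k = int(np.searchsorted(np.cumsum(var), var_fraction) + 1)
    return Xc @ Vt[:k].T, k
```

The reduced rows can then be fed to any standard clustering algorithm, which is what makes the selection step the scalability lever: fewer dimensions, less monitoring data to collect.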
11.
LBlock is a lightweight block cipher proposed in 2011 and suitable for resource-constrained environments. The best previous results on LBlock are based on a 14-round impossible differential characteristic and 15-round related-key impossible differential characteristics, with attacks reaching at most 22 rounds. To study LBlock's resistance to impossible differential cryptanalysis, this work combines the features of the key schedule with the structure of the round function to construct four new 15-round related-key impossible differential characteristics. Extending a 15-round characteristic by 4 rounds forward and 3 rounds backward, 22-round LBlock is analyzed. Building on existing related-key impossible differential attacks, the properties of the S-boxes in the round function are studied in depth, and two classes of related-key impossible differential characteristics are used. With a partial-key separate-guessing technique to reduce computation, the attack on 22-round LBlock requires 2^61 plaintexts and 2^59.58 22-round encryptions.
13.
LBlock is a lightweight block cipher that has attracted much attention for its excellent software and hardware implementation performance. Existing security research on LBlock mostly focuses on resistance against traditional mathematical attacks. Cache attacks, a class of side-channel attacks, have been shown to pose a practical threat to implementations of cryptographic algorithms; among them, trace-driven cache attacks need few samples and are highly efficient. Based on the structure of LBlock and the way its key is input, and exploiting the side-channel information leaked during cache accesses, this paper presents a trace-driven cache attack on LBlock. The analysis shows that the attack recovers the full LBlock key with 106 chosen plaintexts and about 2^27.71 offline encryptions. Compared with the side-channel cube attack on LBlock and the trace-driven cache attack on the Feistel-structured DES, the attack is more effective.
14.
Impossible differential cryptanalysis is an effective attack on block ciphers: it finds differentials that can never occur, eliminates the candidate keys that would satisfy them, and finally recovers the secret key. This paper analyzes the impossible differentials of ARIA, a new Korean block cipher. It first analyzes the properties of ARIA's confusion layer and constructs 4-round impossible differentials. Choosing 2^25.5 plaintext pairs whose ciphertext XORs have the low 64 bits equal to zero, the 4-round impossible differentials are used to attack 5-round ARIA; with 2^30 chosen plaintext pairs, 6-round ARIA is attacked.
16.
GRANULE is an ultra-lightweight block cipher with good software and hardware implementation performance, but no security evaluation of it against impossible differential cryptanalysis has been published so far. Using the miss-in-the-middle technique, several 5-round impossible differential distinguishers of GRANULE64 are found; extending a distinguisher by 3 rounds in each direction yields an 11-round impossible differential attack on GRANULE64/80. The attack recovers the 80-bit master key with time complexity 2^73.3 11-round GRANULE64 encryptions and data complexity 2^64 chosen plaintexts.
17.
LBlock is a lightweight block cipher proposed by Wenling Wu and Lei Zhang at ACNS 2011; at the time of writing, no public cryptanalysis of it had appeared. Using the symbolic computation software Mathematica 7.0, the algebraic expression of the lowest bit of LBlock's third-round output is obtained, with the plaintext and master key bits as variables. This expression depends on only 8 key bits and 9 plaintext bits and can be used for a side-channel attack on LBlock under the single-bit-leakage model. Simulation experiments show that, assuming the lowest bit of the third-round output leaks, 8 known plaintexts suffice to recover 6-7 key bits with probability 85%.
18.
Biclique Cryptanalysis of Reduced-Round PRESENT
The lightweight block cipher PRESENT has attracted wide attention from industry and academia for its excellent hardware implementation performance and concise round function design. Based on the biclique cryptanalysis technique, this paper presents the first biclique key-recovery attack on 21-round PRESENT-80, with computational complexity 2^78.9 and data complexity 2^64. Moreover, the biclique attack on PRESENT-80 can be extended to the same number of rounds of PRESENT-128 and to the security analysis of the DM-PRESENT compression function. Compared with other published cryptanalytic results, the proposed biclique attack has an advantage in memory complexity.
19.
Application scenarios in resource-constrained environments are becoming more and more common, and the demand for data encryption in such scenarios grows accordingly; a large family of lightweight block ciphers, represented by the international standard PRESENT, has emerged in response. PFP is an ultra-lightweight block cipher based on the Feistel structure, and its round function borrows design ideas from PRESENT. PFP has a 64-bit block, an 80-bit key, and 34 rounds. This paper studies PFP's resistance to impossible differential cryptanalysis. In the design document, the designers attack 6-round PFP with a 5-round impossible differential distinguisher and recover 32 bits of the seed key. By studying the concrete design details of the round function and using the differential properties of the S-box, this paper constructs a 7-round impossible differential distinguisher and attacks 9-round PFP, recovering 36 bits of the seed key. This result is better than the existing one in both the number of attacked rounds and the amount of recovered key, and is currently the best impossible differential cryptanalysis of PFP.
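Several of the abstracts above build impossible differential distinguishers from the differential properties of a cipher's S-box. The raw material is the difference distribution table (DDT): entries equal to zero are input/output difference pairs that can never occur through the S-box. The sketch below uses the well-known PRESENT S-box purely for illustration (whether PFP uses the same S-box is not stated here):

```python
# Difference distribution table (DDT) of a 4-bit S-box. Zero entries mark
# "impossible" single-S-box differentials, the building blocks of
# impossible-differential distinguishers.
PRESENT_SBOX = [0xC, 0x5, 0x6, 0xB, 0x9, 0x0, 0xA, 0xD,
                0x3, 0xE, 0xF, 0x8, 0x4, 0x7, 0x1, 0x2]

def ddt(sbox):
    """table[din][dout] = number of inputs x with S(x) ^ S(x ^ din) == dout."""
    n = len(sbox)
    table = [[0] * n for _ in range(n)]
    for din in range(n):
        for x in range(n):
            table[din][sbox[x] ^ sbox[x ^ din]] += 1
    return table

table = ddt(PRESENT_SBOX)
# Nonzero difference pairs that can never occur through this S-box:
impossible = [(din, dout) for din in range(1, 16) for dout in range(1, 16)
              if table[din][dout] == 0]
```

Chaining such single-round impossibilities through the cipher's linear layer (e.g. via the miss-in-the-middle technique mentioned in the GRANULE abstract) is what yields multi-round distinguishers.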