Similar Literature
20 similar records found (search time: 234 ms)
1.
赵艳敏  刘瑜  王美琴 《软件学报》2018,29(9):2821-2828
Differential and linear cryptanalysis are important tools for analyzing cryptographic algorithms, and over the years many researchers have worked to improve these two attacks. Achiya Bar-On et al. proposed a method that lets an attacker mount differential and linear attacks over more rounds of SPN ciphers in which only part of the state passes through the nonlinear layer. The method uses two auxiliary matrices to exploit more of the constraints imposed by the cipher's linear layer and can therefore reach more rounds. Applying this method to a multiple differential attack on the Chinese block cipher SMS4 yields an attack with lower memory and data complexity than existing attacks. With success probability 0.9, the 23-round key-recovery attack on SMS4 requires 2^113.5 plaintexts and a time complexity of 2^126.7 equivalent 23-round encryptions. This is the attack with the lowest memory complexity so far, requiring only 2^17 bytes of memory.

2.
Previous lattice-based group signature schemes can resist quantum attacks, but they suffer from high computational complexity, large communication cost, and oversized system public keys. NTRU lattices are a special class of lattices defined over polynomial rings; since they involve only polynomial-ring multiplications and reductions modulo small integers, NTRU-based cryptosystems need shorter public/private keys and run faster than schemes over general lattices. Using an efficient parameter generation algorithm over NTRU lattices, a new NTRU-lattice-based group signature scheme is constructed. It shortens the system public key, and the system public key, tracing key, and signing key can be computed in parallel, which raises computational efficiency and lowers communication cost. The security of the scheme is reduced to the hardness of the decisional LWE problem and the approximate CVP problem, and a detailed efficiency analysis is given.

3.
李永光  曾光  韩文报 《计算机科学》2015,42(11):217-221
Crypton is an AES candidate block cipher proposed by Korean researchers. By studying the structural features of Crypton and the properties of a class of truncated differential trails, and using the differential enumeration technique to trade memory complexity against data complexity, 4-round and 4.5-round meet-in-the-middle distinguishers are constructed. The new distinguishers reduce the number of multisets in the precomputed table and thus lower the memory complexity. Based on the 4-round distinguisher, the first meet-in-the-middle attack on 7-round Crypton-128 is given, with time complexity 2^113, data complexity 2^113, and memory complexity 2^90.72. Based on the 4.5-round distinguisher, the first meet-in-the-middle attack on 8-round Crypton-192 is given, with time complexity 2^172, data complexity 2^113, and memory complexity 2^138.

4.
俞惠芳  付帅凤 《软件学报》2021,32(9):2935-2944
A blind signature is a special kind of digital signature that is widely used in anonymity-sensitive settings. At present the security of most blind signature schemes rests on the hardness of integer factorization or the discrete logarithm problem. Practical quantum computers, once built, would render such traditional blind signatures insecure, and quantum algorithms already challenge them, so constructing blind signature schemes that resist quantum attacks is important. Multivariate public-key cryptography is one of the main candidates for post-quantum cryptography. Building on multivariate public-key cryptography and blind signatures, a new blind signature scheme under a multivariate public-key cryptosystem is designed. The scheme uses an additional nonlinear invertible transformation LFrFr to separate the signature's public and private keys, reducing the linear relations between them and improving the security of the blind signature. Analysis shows that the scheme is blind, unforgeable, and untraceable, and that it also has low computational complexity and resists quantum attacks.

5.
Zodiac is a 16-round balanced Feistel block cipher designed by Korean researchers. Its security is evaluated for the first time from the viewpoint of zero-correlation integral cryptanalysis. Two classes of 13-round zero-correlation linear approximations are constructed, from which a 13-round zero-correlation integral distinguisher is derived; a zero-correlation integral attack on full-round Zodiac is then mounted, successfully recovering 144 bits of round-subkey information. The results show that the zero-correlation integral attack on the full 16-round Zodiac-128/192/256 requires 2^120 chosen plaintexts and about 2^82 16-round Zodiac encryptions, a time complexity that clearly improves on existing integral attack results.

6.
A lattice attack on a knapsack public-key cryptosystem
The security of a new knapsack public-key cryptosystem built from the Merkle-Hellman knapsack cipher and the Rabin public-key cryptosystem is analyzed. Using lattice reduction to solve a simultaneous Diophantine approximation problem and a two-variable integer linear programming problem recovers part of the private key, and the reconstructed partial key can decrypt arbitrary ciphertexts. The knapsack public-key cryptosystem is therefore insecure.

7.
Among existing security analyses of the Piccolo cipher, apart from biclique analysis, attacks with complexity below exhaustive search reach at most 14 rounds of Piccolo-80 and 18 rounds of Piccolo-128. By analyzing how the Piccolo key schedule leaks information, combining this with an equivalent structure of the cipher, and applying related-key impossible differential cryptanalysis with a divide-and-conquer strategy, attacks on 15-round Piccolo-80 and 21-round Piccolo-128 that include the pre-whitening key are presented. With 2^8 related keys, the attacks require data complexities of 2^58.6 and 2^62.3, memory complexities of 2^60.6 and 2^64.3, and computational complexities of 2^78 and 2^82.5, respectively; with 2^4 related keys, the data complexities are 2^62.6 and 2^62.3, the memory complexities 2^64.6 and 2^64.3, and the computational complexities 2^77.93 and 2^124.45. The results show that 15-round Piccolo-80 and 21-round Piccolo-128 with only the pre-whitening key are insecure against related-key impossible differential attacks.

8.
A biclique attack on the lightweight block cipher MIBS-80 is proposed. Using two independent related-key differential trails, a 4-round biclique of dimension 4 is constructed; the key space is then partitioned and, with a precomputation technique, each key subspace is filtered to reduce the computational cost of the meet-in-the-middle step, giving a key-recovery attack on 12-round MIBS-80. The attack requires 2^52 chosen plaintexts, about 2^77.13 12-round MIBS-80 encryptions, and about 2^8.17 memory, and succeeds with probability 1. Compared with existing attacks it has advantages in memory complexity and success probability.

9.
高红杰  卫宏儒 《计算机科学》2017,44(10):147-149, 181
The lightweight block cipher ESF is a 32-round iterated cipher with a generalized Feistel structure and an SPN round function, a 64-bit block, and an 80-bit key. To study the resistance of ESF to impossible differential cryptanalysis, a 12-round attack is mounted from an 8-round impossible differential trail by adding 2 rounds before it and 2 rounds after it and exploiting relations between the round keys. The computation shows that attacking 12-round ESF requires data complexity O(2^53) and time complexity O(2^60.43), so 12-round ESF is not immune to impossible differential cryptanalysis.

10.
To counter the threat that quantum computing poses to the number-theoretic privacy-preserving techniques used on blockchains, blockchain technology is combined with lattice-based attribute-based encryption, and a lattice-based post-quantum CP-ABE blockchain data sharing scheme is proposed. Taking learning with errors (LWE) as the underlying hardness assumption, a lattice-based ciphertext-policy attribute-based encryption algorithm, LWE-CPABE, is constructed to resist quantum attacks on public-key security and to enable secure data sharing. A standardized transaction structure for the algorithm's parameters is designed so that LWE-CPABE remains accountable, and smart contracts for transaction generation and transaction verification are given on this basis to achieve automatic verification and consensus. Functional analysis and simulation results show that the scheme outperforms traditional CP-ABE schemes based on bilinear maps in the computational efficiency of setup, encryption, decryption, and key generation, enabling efficient, secure, and dynamic data sharing with privacy protection on the blockchain and markedly improving the security of blockchain data sharing.
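For readers unfamiliar with the learning-with-errors assumption underlying LWE-CPABE, the following is a minimal, toy Regev-style LWE encryption of a single bit. It is a generic textbook sketch with made-up parameters, not the paper's LWE-CPABE construction, and the parameter values are far too small for real security.

```python
# Toy Regev-style LWE encryption of one bit (illustrative only; parameters
# are far too small for real security and this is NOT the paper's LWE-CPABE).
import numpy as np

q, n, m = 3329, 16, 64          # modulus, secret dimension, sample count (toy values)
rng = np.random.default_rng(0)

# Key generation: b = A s + e (mod q), with small error e.
s = rng.integers(0, q, n)                       # secret key
A = rng.integers(0, q, (m, n))                  # public matrix
e = rng.integers(-2, 3, m)                      # small noise
b = (A @ s + e) % q                             # public vector

def encrypt(bit):
    r = rng.integers(0, 2, m)                   # random 0/1 combination of samples
    c1 = (r @ A) % q                            # length-n vector
    c2 = (r @ b + bit * (q // 2)) % q           # scalar carrying the message
    return c1, c2

def decrypt(c1, c2):
    d = (c2 - c1 @ s) % q                       # equals bit*(q//2) plus small noise
    return int(min(d, q - d) > q // 4)          # round to the nearest multiple of q/2

assert all(decrypt(*encrypt(bit)) == bit for bit in (0, 1))
```

Decryption works because c2 - c1·s collapses to r·e + bit·(q/2) mod q, and the accumulated noise r·e stays well below q/4 for these toy parameters.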

11.
In this paper, we present efficient algorithms for the inversion of a triangular matrix on different interconnection networks. For hypercubes, we describe an elegant, straightforward implementation of L. Csanky's well-known PRAM algorithm [Ph.D. dissertation, Computer Sci. Div., Univ. of California, Berkeley, 1974]. The time complexity is O(log^2 n) using n^3 processors, i.e., within the same order as the PRAM algorithm. Moreover, we give a general approach for the design of triangular matrix inversion algorithms on a large class of networks. Applied to some of these networks, e.g., the de Bruijn network, the shuffle-exchange network, and the cube-connected cycles, this approach yields triangular matrix inversion algorithms that meet the PRAM complexity bounds of the problem within a small constant.

12.
In this paper, we consider multi-objective evolutionary algorithms for the Vertex Cover problem in the context of parameterized complexity. We consider two different measures for the problem. The first measure is a very natural multi-objective one for the use of evolutionary algorithms and takes into account the number of chosen vertices and the number of edges that remain uncovered. The second fitness function is based on a linear programming formulation and proves to give better results. We point out that both approaches lead to a kernelization for the Vertex Cover problem. Based on this, we show that evolutionary algorithms solve the vertex cover problem efficiently if the size of a minimum vertex cover is not too large, i.e., the expected runtime is bounded by O(f(OPT)·n^c), where c is a constant and f a function that only depends on OPT. This shows that evolutionary algorithms are randomized fixed-parameter tractable algorithms for the vertex cover problem.
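As a concrete illustration of the first, natural multi-objective measure described above, here is a minimal sketch of the two objectives (number of chosen vertices, number of uncovered edges) together with Pareto dominance; the toy graph and the bit-string representation are illustrative assumptions, not taken from the paper.

```python
# Sketch of the natural two-objective fitness for Vertex Cover used with
# multi-objective EAs: minimise (|selected vertices|, #edges left uncovered).
# Graph and bit-string representation are illustrative assumptions.

def fitness(x, edges):
    """x is a 0/1 list over vertices; returns the two objectives to minimise."""
    chosen = sum(x)
    uncovered = sum(1 for u, v in edges if not (x[u] or x[v]))
    return chosen, uncovered

def dominates(f, g):
    """Pareto dominance for minimisation: f is no worse everywhere, better somewhere."""
    return all(a <= b for a, b in zip(f, g)) and any(a < b for a, b in zip(f, g))

edges = [(0, 1), (0, 2), (1, 3), (2, 3)]        # toy 4-vertex graph
print(fitness([1, 0, 0, 1], edges))              # (2, 0): a valid cover of size 2
print(dominates(fitness([1, 0, 0, 1], edges),
                fitness([1, 1, 0, 1], edges)))   # True: same coverage, fewer vertices
```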

13.
Mackworth and Freuder have analyzed the time complexity of several constraint satisfaction algorithms.(1) Mohr and Henderson have given new algorithms, AC-4 and PC-3, for arc and path consistency, respectively, and have shown that the arc consistency algorithm is optimal in time complexity and of the same order space complexity as the earlier algorithms.(2) In this paper, we give parallel algorithms for solving node and arc consistency. We show that any parallel algorithm for enforcing arc consistency in the worst case must have O(na) sequential steps, where n is the number of nodes and a is the number of labels per node. We give several parallel algorithms to do arc consistency. It is also shown that they all have optimal time complexity. The results of running the parallel algorithms on a BBN Butterfly multiprocessor are also presented. This work was partially supported by NSF Grants MCS-8221750, DCR-8506393, and DMC-8502115.
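For context, a sequential arc-consistency baseline of the kind the parallel algorithms above accelerate can be sketched as a standard AC-3 style revision loop; this is a generic textbook sketch, not the paper's AC-4-based or parallel formulation.

```python
# Generic AC-3 style arc consistency sketch (sequential baseline; the paper's
# parallel algorithms and AC-4 bookkeeping are not reproduced here).
from collections import deque

def revise(domains, constraint, xi, xj):
    """Remove values of xi that have no support in xj; return True if any were pruned."""
    removed = False
    for a in list(domains[xi]):
        if not any(constraint(a, b) for b in domains[xj]):
            domains[xi].remove(a)
            removed = True
    return removed

def ac3(domains, arcs, constraints):
    """domains: var -> set of values; arcs: list of (xi, xj); constraints[(xi, xj)](a, b)."""
    queue = deque(arcs)
    while queue:
        xi, xj = queue.popleft()
        if revise(domains, constraints[(xi, xj)], xi, xj):
            if not domains[xi]:
                return False                     # a domain was wiped out: inconsistent
            queue.extend((xk, xi) for (xk, xl) in arcs if xl == xi and xk != xj)
    return True

# Toy example: x < y over small domains.
doms = {"x": {1, 2, 3}, "y": {1, 2, 3}}
cons = {("x", "y"): lambda a, b: a < b, ("y", "x"): lambda a, b: b < a}
ac3(doms, [("x", "y"), ("y", "x")], cons)
print(doms)                                      # x loses 3, y loses 1
```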

14.
We give a specific method to solve with quadratic complexity the linear systems arising in known algorithms to deal with the sign determination problem, both in the univariate and multivariate setting. In particular, this enables us to improve the complexity bound for sign determination in the univariate case to O(sd^2 log^3 d), where s is the number of polynomials involved and d is a bound for their degree. Previously known complexity results involve a factor of d^2.376.

15.
Max Restricted Path Consistency (maxRPC) is a local consistency for binary constraints that enforces a higher order of consistency than arc consistency. Despite the strong pruning that can be achieved, maxRPC is rarely used because existing maxRPC algorithms suffer from overheads and redundancies as they can repeatedly perform many constraint checks without triggering any value deletions. In this paper we propose and evaluate techniques that can boost the performance of maxRPC algorithms by eliminating many of these overheads and redundancies. These include the combined use of two data structures to avoid many redundant constraint checks, and the exploitation of residues to quickly verify the existence of supports. Based on these, we propose a number of closely related maxRPC algorithms. The first one, maxRPC3, has optimal O(end^3) time complexity, displays good performance when used stand-alone, but is expensive to apply during search. The second one, maxRPC3^rm, has O(en^2 d^4) time complexity, but a restricted version with O(end^4) complexity can be very efficient when used during search. The other algorithms are simple modifications of maxRPC3^rm. All algorithms have O(ed) space complexity when used stand-alone. However, maxRPC3 has O(end) space complexity when used during search, while the others retain the O(ed) complexity. Experimental results demonstrate that the resulting methods constantly outperform previous algorithms for maxRPC, often by large margins, and constitute a viable alternative to arc consistency on some problem classes.
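The residue idea mentioned above, caching the last support found for a value so it can be re-checked before any new search, can be illustrated with a small sketch; the data structures and names below are illustrative assumptions, not the maxRPC3 algorithms themselves.

```python
# Sketch of residue-based support checking: remember the last support found
# for (variable, value) on a constraint and re-verify it before searching again.
# Illustrative only; this is plain support seeking, not full maxRPC3/maxRPC3^rm.

residues = {}   # (xi, a, xj) -> last value of xj known to support (xi, a)

def has_support(domains, constraint, xi, a, xj):
    """Check whether value a of xi still has a support in xj, using the residue."""
    r = residues.get((xi, a, xj))
    if r is not None and r in domains[xj] and constraint(a, r):
        return True                              # cached support still valid: no search
    for b in domains[xj]:                        # otherwise look for a fresh support
        if constraint(a, b):
            residues[(xi, a, xj)] = b            # cache it as the new residue
            return True
    return False

doms = {"x": {1, 2, 3}, "y": {2, 3}}
neq = lambda a, b: a != b
print(has_support(doms, neq, "x", 1, "y"))       # True; a support is cached as residue
doms["y"].discard(residues[("x", 1, "y")])       # remove the cached support from y
print(has_support(doms, neq, "x", 1, "y"))       # still True after a fresh search
```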

16.
Consistency techniques for continuous constraints
We consider constraint satisfaction problems with variables in continuous, numerical domains. Contrary to most existing techniques, which focus on computing one single optimal solution, we address the problem of computing a compact representation of the space of all solutions admitted by the constraints. In particular, we show how globally consistent (also called decomposable) labelings of a constraint satisfaction problem can be computed. Our approach is based on approximating regions of feasible solutions by 2^k-trees, a representation commonly used in computer vision and image processing. We give simple and stable algorithms for computing labelings with arbitrary degrees of consistency. The algorithms can process constraints and solution spaces of arbitrary complexity, but with a fixed maximal resolution. Previous work has shown that when constraints are convex and binary, path-consistency is sufficient to ensure global consistency. We show that for continuous domains, this result can be generalized to ternary and in fact arbitrary n-ary constraints using the concept of (3,2)-relational consistency. This leads to polynomial-time algorithms for computing globally consistent labelings for a large class of constraint satisfaction problems with continuous variables.
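The 2^k-tree approximation of the solution space can be pictured with a tiny quadtree-style sketch in two dimensions (k = 2): each box is labeled as inside, outside, or mixed with respect to a constraint, and mixed boxes are split until a fixed resolution is reached. The constraint and resolution below are illustrative assumptions, not the paper's algorithms.

```python
# Quadtree-style (2^k-tree with k = 2) approximation of a feasible region,
# here the toy constraint x^2 + y^2 <= 1 on the box [0,1] x [0,1].
# Boxes are labelled 'in', 'out', or split until a minimum size is reached.

def classify(x0, y0, x1, y1):
    """Classify a box against the constraint via its corners (exact for this monotone toy case)."""
    corners = [(x0, y0), (x0, y1), (x1, y0), (x1, y1)]
    inside = [x * x + y * y <= 1.0 for x, y in corners]
    if all(inside):
        return "in"
    if not any(inside):
        return "out"
    return "mixed"

def build(x0, y0, x1, y1, min_size=0.25):
    label = classify(x0, y0, x1, y1)
    if label != "mixed" or (x1 - x0) <= min_size:
        return (label, (x0, y0, x1, y1))
    xm, ym = (x0 + x1) / 2, (y0 + y1) / 2        # split into 2^k = 4 children
    return ("node", [build(x0, y0, xm, ym, min_size),
                     build(xm, y0, x1, ym, min_size),
                     build(x0, ym, xm, y1, min_size),
                     build(xm, ym, x1, y1, min_size)])

tree = build(0.0, 0.0, 1.0, 1.0)
print(tree[0])   # 'node': the unit box is mixed, so it gets subdivided
```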

17.
Most of the problems involving the design and planning of manufacturing systems are combinatorial and NP-hard. A well-known manufacturing optimization problem is the assembly line balancing problem (ALBP). Due to the complexity of the problem, a growing number of researchers have employed genetic algorithms in recent years. In this article, a survey of the recently published literature on assembly line balancing with genetic algorithms is conducted. In particular, we summarize the main specifications of the problems studied, the genetic algorithms suggested, and the objective functions used in evaluating the performance of the genetic algorithms. Moreover, future research directions are identified and suggested.

18.
Inner product encryption (IPE) is an important research area of functional cryptosystems. It can improve user access control and fine-grained querying, and has a wide range of applications in emerging fields such as cloud computing. Lattice-based inner product encryption has the advantages of resisting quantum-algorithm attacks and of relatively simple encryption algorithms, and thus has good application prospects. Currently, the provable security of lattice-based public-key encryption schemes mostly rests on the learning with errors (LWE) and polynomial learning with errors (PLWE) problems. However, most encryption schemes based on the LWE problem suffer from oversized public keys and ciphertexts. Encryption schemes based on the PLWE problem further reduce these sizes, but their hardness is limited by the polynomials involved and the security guarantees are weakened. Compared with the LWE and PLWE problems, the Middle-Product Learning With Errors (MP-LWE) problem relaxes the polynomial restriction and makes full use of polynomial structure to achieve a better balance between security and efficiency. Inspired by this, we proposed a Sel-IND-CPA-secure inner product encryption scheme at SocialSec 2022. In this paper, to extend that work and improve the functionality of IPE, we construct Ad-IND-CPA-secure single-input and multi-input inner product encryption schemes and evaluate their efficiency.

19.
Yang Yu 《Artificial Intelligence》2008,172(15):1809-1832
Evolutionary algorithms (EAs) have been shown to be very effective in solving practical problems, yet many of their important theoretical issues remain unclear. The expected first hitting time is one of the most important theoretical issues of evolutionary algorithms, since it implies the average computational time complexity. In this paper, we establish a bridge between the expected first hitting time and another important theoretical issue, namely the convergence rate. Through this bridge, we propose a new general approach to estimating the expected first hitting time. Using this approach, we analyze EAs with different configurations, including three mutation operators, with/without population, a recombination operator, and a time-variant mutation operator, on a hard problem. The results show that the proposed approach is helpful for analyzing a broad range of evolutionary algorithms. Moreover, we give an explanation of what makes a problem hard for EAs and, based on this recognition, prove the hardness of a general problem.
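The expected first hitting time discussed above can be made concrete with a tiny experiment: run a (1+1)-EA with standard bit-flip mutation on OneMax and record the first generation at which the optimum is reached. This is a standard textbook illustration under assumed settings, not the analytical approach of the paper.

```python
# Empirical first hitting time of a (1+1)-EA on OneMax (count of 1-bits).
# Standard bit-flip mutation with rate 1/n; a textbook illustration of the
# "expected first hitting time" quantity analysed in the paper.
import random

def one_plus_one_ea_hitting_time(n, seed=0):
    rng = random.Random(seed)
    x = [rng.randint(0, 1) for _ in range(n)]
    t = 0
    while sum(x) < n:                            # optimum of OneMax: the all-ones string
        y = [bit ^ 1 if rng.random() < 1.0 / n else bit for bit in x]
        if sum(y) >= sum(x):                     # accept the offspring if it is no worse
            x = y
        t += 1
    return t

n = 50
runs = [one_plus_one_ea_hitting_time(n, seed) for seed in range(20)]
print(sum(runs) / len(runs))                     # empirical mean, of order n*log(n)
```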

20.
In this paper we describe how to apply fine-grain parallelism to augmenting path algorithms for the dense linear assignment problem. We demonstrate that the suggested technique can be efficiently implemented on commercially available, massively parallel computers. Using n processors, our method reduces the computational complexity from the sequential O(n^3) to the parallel complexity of O(n^2). Exhaustive experiments are performed on a MasPar MP-2 in order to determine which of the algorithmic flavors fits best on this kind of architecture.
