Similar Articles
20 similar articles found (search time: 703 ms)
1.
Modeling and Analysis of Differential Fault Attacks on AES   Cited by 1 (1 self-citation, 0 others)
This paper studies the security of the Advanced Encryption Standard (AES) against differential fault attacks. The attack uses a single-byte random fault model targeting the key expansion algorithm and recovers the seed key by comparing correct and faulty ciphertexts. In theory, the attack fully recovers the 128-bit AES seed key using only 104 faulty ciphertexts plus an exhaustive search over 2 bytes of the final-round subkey. Because fault positions are not uniformly distributed, the number of faulty ciphertexts required in practice differs slightly from the theoretical value.

2.
To address plaintext recovery for RC4 under different seed-key lengths, this paper proposes plaintext-recovery attacks on RC4 encryption with seed keys of 8, 16, and 22 bytes. First, a statistical algorithm run over 2^32 different seed keys measured the t-value distribution of each keystream output byte, revealing biases in the RC4 keystream output sequence. Then, using the single-byte and double-byte bias patterns, an attack algorithm on the first 256 bytes of RC4-encrypted plaintext is given. Experimental results show that, given 2^31 ciphertexts, the attack recovers the first 196 plaintext bytes (all except the 4th byte) with 100% success. For an 8-byte seed key, the recovery success rate for each of the first 256 bytes exceeds 91%; correspondingly, it exceeds 87% for a 16-byte seed key and 81% for a 22-byte seed key. The proposed attack extends earlier attacks, which targeted only 16-byte seed keys, and recovers RC4-encrypted plaintext more effectively in practice.
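The keystream biases exploited above can be reproduced with a toy experiment. The sketch below is standard RC4, not the paper's statistical algorithm; the 16-byte key length and trial count are arbitrary choices for speed. It samples random keys and counts how often the second output byte is zero, the best-known single-byte bias (probability roughly 2/256 instead of 1/256):

```python
import os

def rc4_keystream(key: bytes, n: int) -> bytes:
    """Generate n bytes of RC4 keystream (KSA + PRGA)."""
    S = list(range(256))
    j = 0
    for i in range(256):                      # key-scheduling algorithm
        j = (j + S[i] + key[i % len(key)]) % 256
        S[i], S[j] = S[j], S[i]
    out, i, j = [], 0, 0
    for _ in range(n):                        # pseudo-random generation
        i = (i + 1) % 256
        j = (j + S[i]) % 256
        S[i], S[j] = S[j], S[i]
        out.append(S[(S[i] + S[j]) % 256])
    return bytes(out)

def count_zero_second_byte(trials: int = 10000) -> int:
    """Count random keys whose second keystream byte is 0."""
    return sum(rc4_keystream(os.urandom(16), 2)[1] == 0
               for _ in range(trials))
```

Under a uniform keystream one would expect about trials/256 ≈ 39 hits out of 10,000; RC4's known second-byte bias roughly doubles that.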

3.
The byte stream is widely used in malware detection due to its independence of reverse engineering. However, existing methods based on the byte stream implement an indiscriminate feature extraction strategy, which ignores the byte function difference in different segments and fails to achieve targeted feature extraction for various byte semantic representation modes, resulting in byte semantic confusion. To address this issue, an enhanced adversarial byte function associated method for malware backdoor attack is proposed in this paper by categorizing various function bytes into three functions involving structure, code, and data. The Minhash algorithm, grayscale mapping, and state transition probability statistics are then used to capture byte semantics from the perspectives of text signature, spatial structure, and statistical aspects, respectively, to increase the accuracy of byte semantic representation. Finally, the three-channel malware feature image is constructed based on different function byte semantics, and a convolutional neural network is applied for detection. Experiments on multiple data sets from 2018 to 2021 show that the method can effectively combine byte functions to achieve targeted feature extraction, avoid byte semantic confusion, and improve the accuracy of malware detection.

4.
In programming practice, problems involving the display, transmission, encryption, and decryption of information frequently require compressing information, that is, encoding and the corresponding processing. Using Visual Basic 2008, a data encoding and processing module was designed, implementing generation and bit-manipulation techniques for characters (strings), bytes, hexadecimal strings, Unicode encodings, and random bytes (arrays). These techniques can be applied directly to computer monitoring systems, Chinese SMS messaging, and ...

5.
《Micro, IEEE》1990,10(3):9-21
The issue of data exchange between type-1 and type-2 buses, which multiplex the first data byte (which has the lowest address) with the least and most significant portions of the address, respectively, is considered. In an analogy based on Gulliver's Travels, the associated architectures have been dubbed little-endian and big-endian processors, respectively. It is pointed out that the byte order within integers and the byte order during transmission can differ. Therefore, the big and little adjectives are used to describe the order of bytes within integers, and the acronyms Mad and Sad to describe the order of bytes (most versus least important first) during transmission on a multiplexed address-data bus. After a review of the endian ordering issues, it is concluded that big- and little-endians can use the same bus standard. For high-performance serialized buses, the mad-endian order seems superior to a sad-endian order. For consistency between serialized and multiplexed parallel buses of various widths, the mad-endian order is proposed for future multiplexed standards. To minimize the interface costs to mad-endian buses, a big-endian order is proposed for shared data as well.
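The byte-order distinction the article formalizes is easy to demonstrate; a minimal sketch using Python's standard `struct` module:

```python
import struct

value = 0x0A0B0C0D

# Little-endian: least significant byte first (lowest address)
le = struct.pack("<I", value)
# Big-endian: most significant byte first
be = struct.pack(">I", value)

print(le.hex())   # 0d0c0b0a
print(be.hex())   # 0a0b0c0d

# Misinterpreting little-endian bytes as big-endian
# yields a byte-swapped integer
swapped, = struct.unpack(">I", le)
print(hex(swapped))   # 0xd0c0b0a
```

The same four bytes thus decode to two different integers depending on the assumed order, which is exactly the interoperability problem a shared bus standard must resolve.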

6.
在对恶意代码进行检测和分类时,由于传统的灰度编码方法将特征转换为图像的过程中,会产生特征分裂和精度损失等问题,严重影响了恶意代码的检测性能.同时,传统的恶意代码检测和分类的数据集中只使用了单一的恶意样本,并没有考虑到良性样本.因此,文中采用了一个包含良性样本和恶意样本的数据集,同时提出了一种双字节特征编码方法.首先将待...  相似文献   

7.
This paper discusses a new idea for improving the recovery of (n, k, m) convolutional-code-based packet-level FEC. We extend the coefficients in the generator matrix to elements of GF(2^8). Each byte in the redundant packets is generated independently, treating the bytes as elements of GF(2^8). The post-reconstruction receiving rate is analytically derived from the necessary conditions for recovery. Moreover, the ability to recover lost packets is estimated based on this analysis, and we demonstrate that the extension of coefficients recovers more packets than the conventional method, in which coefficients are 0 or 1.
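Generating redundant bytes from coefficients in GF(2^8) requires carry-less polynomial multiplication modulo an irreducible polynomial. The paper does not state which polynomial it uses; the sketch below assumes the common AES polynomial x^8 + x^4 + x^3 + x + 1 (0x11B):

```python
def gf256_mul(a: int, b: int) -> int:
    """Multiply two field elements in GF(2^8) mod x^8+x^4+x^3+x+1."""
    p = 0
    for _ in range(8):
        if b & 1:
            p ^= a            # conditional add (XOR) of the multiplicand
        carry = a & 0x80
        a = (a << 1) & 0xFF
        if carry:
            a ^= 0x1B         # reduce by the irreducible polynomial
        b >>= 1
    return p

def redundant_byte(data: bytes, coeffs: bytes) -> int:
    """A redundant byte as a GF(2^8) dot product of data bytes
    with generator-matrix coefficients."""
    r = 0
    for d, c in zip(data, coeffs):
        r ^= gf256_mul(d, c)
    return r
```

With all coefficients equal to 1, this degenerates to the conventional XOR parity, which illustrates why richer GF(2^8) coefficients give the decoder more independent equations to work with.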

8.
Computer systems generally adopt wide memory-bus structures to increase throughput, which makes reading and writing data at byte granularity inconvenient. Based on an analysis of the access mechanisms of mainstream SDRAM devices, an automatic data-alignment scheme is designed for an SDRAM memory controller. The scheme automatically generates the required memory control signals from the input byte address and data, and aligns the data to the correct byte positions. The principles and methods for generating addresses and byte-control signals and for performing byte alignment are described in detail, and the design is implemented on an FPGA chip. Compared with cache-based schemes and schemes based on two memory read-write operations, the new scheme uses fewer hardware resources and achieves higher memory-bandwidth utilization.
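The core of such a scheme, mapping a byte address to a word address, byte lane, and byte-enable mask on a wide bus, can be sketched in Python. A 32-bit bus with little-endian lane ordering and active-low DQM byte enables is assumed here; the paper's actual control-signal generation is FPGA logic, not software:

```python
def byte_lane_signals(byte_addr: int, bus_bytes: int = 4):
    """Map a byte address to (word address, lane index, DQM mask).

    SDRAM controllers typically use active-low DQM byte-enable signals;
    the mask below clears the bit of the one lane being written.
    """
    word_addr = byte_addr // bus_bytes
    lane = byte_addr % bus_bytes
    dqm = (~(1 << lane)) & ((1 << bus_bytes) - 1)  # active-low enables
    return word_addr, lane, dqm

def align_byte(data: int, lane: int) -> int:
    """Shift a data byte into its lane position on the wide bus."""
    return (data & 0xFF) << (8 * lane)
```

For example, byte address 5 on a 32-bit bus maps to word 1, lane 1, with only that lane's DQM bit driven low.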

9.
Chunking is a process to split a file into smaller files called chunks. In some applications, such as remote data compression, data synchronization, and data deduplication, chunking is important because it determines the duplicate detection performance of the system. Content-defined chunking (CDC) is a method to split files into variable-length chunks, where the cut points are defined by some internal features of the files. Unlike fixed-length chunks, variable-length chunks are more resistant to byte shifting. Thus, it increases the probability of finding duplicate chunks within a file and between files. However, CDC algorithms require additional computation to find the cut points, which might be computationally expensive for some applications. In our previous work (Widodo et al., 2016), the hash-based CDC algorithm used in the system took more process time than other processes in the deduplication system. This paper proposes a high-throughput hash-less chunking method called Rapid Asymmetric Maximum (RAM). Instead of using hashes, RAM uses byte values to declare the cut points. The algorithm utilizes a fixed-size window and a variable-size window to find a maximum-valued byte, which is the cut point. The maximum-valued byte is included in the chunk and located at the boundary of the chunk. This configuration allows RAM to do fewer comparisons while retaining the CDC property. We compared RAM with existing hash-based and hash-less deduplication systems. The experimental results show that our proposed algorithm has higher throughput and bytes saved per second compared to other chunking algorithms.
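The cut-point rule of RAM can be sketched in a few lines. The version below is a simplified reading of the abstract (the window size and the >= comparison are assumptions; the published algorithm also bounds chunk sizes):

```python
def ram_cut_point(data: bytes, w: int = 8) -> int:
    """Return the cut point (chunk length) for the next chunk.

    The first w bytes form the fixed-size window and establish a running
    maximum; the variable-size window then extends until a byte >= that
    maximum is seen. The maximum-valued byte is included in the chunk.
    """
    if len(data) <= w:
        return len(data)
    m = max(data[:w])
    for i in range(w, len(data)):
        if data[i] >= m:
            return i + 1          # cut just after the maximum-valued byte
    return len(data)

def chunk_stream(data: bytes, w: int = 8):
    """Split a byte stream into RAM-style content-defined chunks."""
    chunks, pos = [], 0
    while pos < len(data):
        cut = ram_cut_point(data[pos:], w)
        chunks.append(data[pos:pos + cut])
        pos += cut
    return chunks
```

Because the cut point depends only on byte values in a local window, inserting bytes early in a file shifts only nearby cut points, which is the byte-shift resistance the abstract describes.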

10.
Based on the structure of the Rijndael cipher, a new 5-round meet-in-the-middle distinguisher is constructed: if the first byte of the input state varies while the remaining bytes are fixed, then after 5 rounds of encryption the difference of every output byte is determined, with probability 2^-96, by the value of the first input byte and 25 constant bytes. Based on this distinguisher, a meet-in-the-middle attack on 9-round Rijndael-256 is given. Analysis shows that the attack has a data complexity of about 2^128 chosen plaintexts and a time complexity of about 2^211.6 9-round Rijndael-256 encryptions.

11.
Using Visual Basic 2008, comprehensive checksum computation and packet generation were implemented, covering additive sums, XOR checksums, cyclic redundancy checks, complement-of-sum checksums, and end-code handling. For convenience of data processing, hexadecimal strings are used for input and output in place of raw bytes; when packets are sent, all data are transmitted as bytes. These techniques can be applied directly in the communications field and in related engineering projects.
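The checksum variants listed (additive sum, XOR, CRC, complement of sum) together with hex-string I/O are straightforward to sketch in Python. The VB module's exact byte layout and end-code handling are not specified, so this is illustrative only:

```python
import binascii

def additive_checksum(data: bytes) -> int:
    """Sum of all bytes, truncated to one byte."""
    return sum(data) & 0xFF

def xor_checksum(data: bytes) -> int:
    """XOR of all bytes (longitudinal redundancy check)."""
    c = 0
    for b in data:
        c ^= b
    return c

def complement_checksum(data: bytes) -> int:
    """Two's complement of the sum: packet bytes + checksum sum to 0."""
    return (-sum(data)) & 0xFF

def crc32_checksum(data: bytes) -> int:
    """Standard CRC-32 via the stdlib."""
    return binascii.crc32(data) & 0xFFFFFFFF

# Hex-string input/output, as in the module described above
packet = bytes.fromhex("010203")
print(f"{additive_checksum(packet):02X}")  # 06
```

The complement variant has the convenient receiver-side property that summing the packet bytes together with the checksum always yields zero modulo 256.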

12.
Pei‐Chi Wu 《Software》1999,29(4):299-309
Allocating small code space to frequent symbols can achieve data compression. The well‐known Huffman coding technique is the optimal method that allocates code space in an integral number of bits. By allocating code space precisely to a fraction of a bit, arithmetic coding achieves a better efficiency than Huffman coding. In arithmetic coding, the range of the code points is usually kept in [1/2, 1) by applying a normalization technique. This paper proposes a method that adjusts the range in [1/2d, 1). The range is allowed to decrease below 1/2 without applying normalization each time. This method reduces the number of shifts in normalization to 1/d. When d=8, normalization can be implemented byte‐by‐byte. This is particularly advantageous to applications where inputs and outputs are handled in bytes. We apply this method both to adaptive coding and semi‐static coding. Timing results on four platforms are presented. Copyright © 1999 John Wiley & Sons, Ltd.

13.
An Entropy-Based Improved Hash Algorithm for IP Flow Detection   Cited by 1 (0 self-citations, 1 other)
刘许刚  马宏 《计算机工程》2011,37(16):94-97
This paper reviews the hash algorithms commonly used in flow detection and analyzes the XOR-SHIFT algorithm from the perspective of information entropy. The XOR operation is extended to the byte level. Exploiting the fact that, in regional network monitoring, the low-order bytes of packet IP addresses change more frequently than the high-order bytes, a symmetric cross-XOR operation carries as much of the information in the 5-tuple as possible into the hash value. This improves the dispersion of the hash algorithm and makes subsequent flow-keyed IP processing more efficient.

14.
A New Encryption Method and Its Application to Digital Data Copyright Protection   Cited by 1 (0 self-citations, 1 other)
A new encryption method with multiple private keys is proposed. Its security rests on the hardness of decoding block codes over finite fields. Based on the proposed encryption algorithm, a piracy-tracing scheme for digital data copyright is given. For each legal copy of the digital data, the tracing scheme assigns a codeword of a block code corresponding to that copy's decryption software or associated device. The scheme can efficiently extract the codeword corresponding to the decryption software or device and then, using the decoding algorithm of the block code, identify all legally authorized users who participated in producing an illegal copy, under the assumption that the number of such users does not exceed the error-correcting capability of the code. Compared with other existing schemes, the proposed scheme performs better and is easy to implement. Potential application areas of the proposed encryption scheme include copyright protection of digital data such as computer software and video/audio products.

15.
Analytical and experimental evaluations show that ATM adapters can perform quite close to their designed limits, provided that they are used in a properly configured environment with resources capable of sustaining the desired throughputs. While the media speed may be 155, 100 or 25 Mbps, there are a number of factors that will determine the final achieved maximum throughput observed by a user of an ATM adapter. One is the overhead inherent to ATM, such as the 5-byte ATM header that accompanies every 48 bytes of data sent. Others are inherent in the protocol used in the communication layers above ATM (e.g., pacing and retransmission associated with TCP/IP). Still others are dependent upon the processor speed and operating system used by the adapter host system. This paper discusses those parameters that, based on our experience, can have a considerable impact on the throughput of an ATM adapter. Turboways 25, 100 and 155 ATM adapter measurements illustrate these issues. These represent end-to-end (application layer-to-application layer) throughput measurements, involving all supporting hardware (workstations, 8260 ATM hub/switch, 8282 ATM concentrator, etc.) and all protocol layers, operating system, etc. This work was supported in part through the 1994 and 1995 IBM SUR grants, the IBM-NCSU ATM partnership effort and NSF award ACS-9418960.

16.
We propose an artificial intelligence membrane to detect network intrusion, which is analogous to a biological membrane that prevents viruses from entering cells. This artificial membrane is designed to monitor incoming packets and to prevent a malicious program code (e.g., a shellcode) from breaking into a stack or heap in a memory. While monitoring incoming TCP packets, the artificial membrane constructs a TCP segment of incoming packets, and derives the byte frequency of the TCP segment (byte values 0 to 255) as well as the entropy and size of the segment. These features of the segment can be classified by a data-mining technique such as a decision tree or neural network. If the data-mining method finds a suspicious byte sequence, the sequence is emulated to confirm that it really is a shellcode. If the byte sequence is a shellcode, the sequence is dropped. At the same time, an alert is communicated to the system administrator. Our experiments examined seven data-mining methods for normal and malicious network traffic. The malicious traffic included 114 shellcodes, provided by the Metasploit framework, and including 10 types of metamorphic or polymorphic shellcodes. In addition, real network traffic involving shellcodes was examined. We found that a random forest method outperformed all the other data-mining methods and had a very high detection accuracy, including a true-positive rate of 99.6% and a false-positive rate of 0.4%.
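The segment features the membrane extracts — byte frequency, entropy, and size — are simple to compute. A minimal sketch of the feature-extraction step only (no classifier or emulation):

```python
import math
from collections import Counter

def segment_features(segment: bytes):
    """Byte histogram (values 0-255), Shannon entropy (bits/byte), size."""
    counts = Counter(segment)
    freq = [counts.get(v, 0) for v in range(256)]
    n = len(segment)
    entropy = 0.0
    if n:
        for c in counts.values():
            p = c / n
            entropy -= p * math.log2(p)   # Shannon entropy
    return freq, entropy, n
```

Entropy ranges from 0 (a single repeated byte) to 8 bits/byte (all 256 values equally likely), which is why packed or encrypted payloads stand out in such features.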

17.
A Least-Utility Cache Replacement Algorithm for Streaming Media   Cited by 7 (0 self-citations, 7 others)
A least-utility cache replacement algorithm, SCU-K, is proposed. It considers the most recent K accesses to each streaming-media file, letting the cache size adapt dynamically to changes in media popularity, byte utility, and the size of the cached portion. This lowers the probability that the prefix of a file is evicted and avoids the continuous eviction of media files that occurs under LRU and LFU. In comparative experiments against LRU, LFU, and LRU-2, the SCU-K algorithm performs better in cache space utilization, byte hit ratio, and startup latency.

18.
Chaos block cipher for wireless sensor network   Cited by 4 (0 self-citations, 4 others)
A new single-byte block cipher algorithm for wireless sensor networks, combining the strengths of several existing cipher algorithms, is studied. The subkeys are generated through a discrete Logistic map, and a Feistel encrypting function with a discrete chaos operation is constructed. A single-byte block is encrypted and decrypted through one permutation, a split into two半-byte halves, four rounds of Feistel-structure operation, and one further permutation. The number of subkeys varies with the number of Feistel rounds. The randomness and security of the subkeys are proven, and an experiment with the block cipher in a wireless sensor network was completed. The results indicate that the algorithm is secure and that a single-byte chaos block cipher is feasible for wireless sensor networks.
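Subkey generation from a logistic map can be sketched as follows. The paper's exact discretization and parameters are not given; the chaotic regime mu = 4 and byte quantization below are assumptions:

```python
def logistic_subkeys(x0: float, rounds: int, mu: float = 4.0):
    """Derive one subkey byte per Feistel round from the logistic map
    x_{n+1} = mu * x_n * (1 - x_n), quantizing each state to a byte."""
    assert 0.0 < x0 < 1.0
    keys, x = [], x0
    for _ in range(rounds):
        x = mu * x * (1.0 - x)
        keys.append(int(x * 256) & 0xFF)   # quantize state to one byte
    return keys
```

The seed x0 plays the role of the secret key: the same seed reproduces the same subkey schedule deterministically, while nearby seeds diverge quickly because of the map's sensitivity to initial conditions.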

19.
Let R = F_2 + uF_2 + u^2F_2. The depth of a codeword over R and the depth distribution of linear codes over R are defined, properties of codeword depth over R are studied, and a recursive algorithm for computing the depth of codewords over R is given. From a linear code C over R and its generator matrix, linear codes C_1, C_u, C_{u^2} over F_2 and their corresponding generator matrices are obtained. Using the relations among C_1, C_u, and C_{u^2}, the depth spectrum and depth distribution of linear codes over R are discussed, yielding the depth distribution of a class of linear codes over R.

20.
An external sorting algorithm based on quicksort is presented. The file to be sorted is kept on a disk and only those blocks are fetched into the main memory which are currently needed. At each time, a block is kept in the main memory, if the expected space-time cost of holding it until its next use is smaller than the expected space-time cost of removing it and fetching it again. The efficiency of the algorithm is tested through simulation experiments and the results are compared to those achieved with mergesort in a corresponding environment. The total execution time and the main memory space-time integral are used for measuring the performance.

When equal block sizes are used, external quicksort results in a much smaller average space requirement than mergesort. On the other hand, mergesort is somewhat faster than external quicksort. The main memory space-time integral of quicksort is always considerably smaller than that of mergesort. External quicksort is less sensitive to the block size and to the file size. With faster disks, the performance of external quicksort improves faster than that of mergesort. The relative difference of the algorithms is independent of the file size.

The external quicksort is also analytically compared to some previous external versions of quicksort. It is shown to require less space and fewer block fetches than the others.



Copyright © 北京勤云科技发展有限公司 (Beijing Qinyun Technology Development Co., Ltd.)  京ICP备09084417号