Similar Articles
Found 20 similar articles (search time: 169 ms)
1.
A string-length addition-chain algorithm for scalars is presented to speed up elliptic-curve scalar multiplication. The new algorithm combines direct underlying-field computation of 2Q+P and 2^nR+S, and uses a large/small-window technique to merge the string-length algorithm with the sliding-window algorithm, reducing addition-chain length, storage, and precomputation. Its efficiency improves on the binary method by 53%, on NAF by 47.5%, on the string-length algorithm by 46.2%, and on the window method by 42.2%.
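For orientation, the binary (double-and-add) baseline that this abstract benchmarks against can be sketched as follows. This is a hedged illustration, not the paper's algorithm: the "points" are integers under addition mod N (standing in for elliptic-curve points), and the fused 2Q+P / 2^nR+S formulas are not modeled.

```python
# Double-and-add scalar multiplication, the "binary method" baseline.
# The group here is integers under addition mod N, a stand-in for
# elliptic-curve points (assumption for readability; N is illustrative).
N = 2**13 - 1  # toy group order

def double(P):
    return (P + P) % N

def add(P, Q):
    return (P + Q) % N

def scalar_mult(k, P):
    """Left-to-right double-and-add: process bits of k from MSB down."""
    R = 0  # identity element
    for bit in bin(k)[2:]:
        R = double(R)
        if bit == "1":
            R = add(R, P)
    return R

print(scalar_mult(1000, 3))  # equals (1000 * 3) % N
```

Window methods such as the one in the abstract reduce the number of `add` calls by precomputing small multiples of P and consuming several bits of k per step.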

2.
An optimal estimation method is proposed for the window size in the m-ary algorithms for large-number modular exponentiation and point multiplication. Unlike the traditional brute-force search, or testing every candidate window size in turn, the method rests on the following analysis: the basic operation of m-ary modular exponentiation is large-number multiplication (both squaring and general multiplication), and the m-ary point-multiplication algorithm in elliptic-curve cryptography follows the same steps, with point doubling and point addition as its basic operations. From the number of calls to these basic operations, an estimation formula for the optimal window size is derived. The m-ary algorithm was implemented experimentally, and the window size computed from the formula was…

3.
An optimal estimation method is proposed for the window size in the m-ary algorithms for large-number modular exponentiation and point multiplication. Unlike the traditional brute-force search, or testing every candidate window size in turn, the method rests on the following analysis: the basic operation of m-ary modular exponentiation is large-number multiplication (both squaring and general multiplication), and the m-ary point-multiplication algorithm in elliptic-curve cryptography follows the same steps, with point doubling and point addition as its basic operations. From the number of calls to these basic operations, an estimation formula for the optimal window size is derived. The m-ary algorithm was implemented experimentally, and the measured running time with the window size computed from the estimation formula agrees well with the theoretical analysis.
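The operation-count analysis the abstract describes can be made concrete by instrumenting m-ary exponentiation. Below is a hedged sketch of m-ary (fixed-window, m = 2^w) modular exponentiation that counts basic multiplications empirically; the paper's estimation formula itself is not reproduced here.

```python
# m-ary modular exponentiation with window width w (m = 2^w), counting
# basic multiplications -- the quantity the paper's estimation formula
# for the optimal window size is derived from.
def mary_pow(g, e, n, w):
    m = 1 << w
    mults = 0
    # precompute g^0 .. g^(m-1)
    table = [1]
    for _ in range(m - 1):
        table.append(table[-1] * g % n)
        mults += 1
    # split the exponent into base-m digits, least significant first
    digits = []
    while e:
        digits.append(e % m)
        e //= m
    r = 1
    for d in reversed(digits):      # most significant digit first
        for _ in range(w):          # w squarings per digit
            r = r * r % n
            mults += 1
        if d:                       # one multiply per nonzero digit
            r = r * table[d] % n
            mults += 1
    return r, mults

r, c = mary_pow(7, 2**128 + 12345, 10**9 + 7, 4)
assert r == pow(7, 2**128 + 12345, 10**9 + 7)
```

Sweeping `w` and comparing the returned `mults` counts is the brute-force approach the paper replaces with a closed-form estimate.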

4.
In theory, TCP's additive-increase/multiplicative-decrease window algorithm lets the congestion window converge to an ideal size, with different nodes sharing bandwidth fairly. Experiments and analysis show, however, that TCP connections on different routing paths do not share bandwidth fairly. This paper verifies this unfairness experimentally and eliminates it with an algorithm whose idea is to find a common update time for all TCP connections, so that they update their windows at the same rate, thereby removing the unfairness.
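The convergence-to-fairness property the abstract invokes can be sketched in a few lines. This is a hedged toy model, not the paper's algorithm: two flows update synchronously (the "common update time" idea), and the capacity and AIMD parameters are illustrative.

```python
# Additive-increase/multiplicative-decrease (AIMD) for two flows sharing
# one link. With synchronized updates, multiplicative decrease halves the
# gap between the flows each congestion event, so the windows converge
# toward an equal split. Parameters are illustrative, not from the paper.
def aimd(w1, w2, capacity=100.0, alpha=1.0, beta=0.5, steps=200):
    for _ in range(steps):
        if w1 + w2 > capacity:      # congestion: multiplicative decrease
            w1 *= beta
            w2 *= beta
        else:                       # additive increase
            w1 += alpha
            w2 += alpha
    return w1, w2

w1, w2 = aimd(5.0, 80.0)
print(abs(w1 - w2) < 5.0)  # starting far apart, the flows end near-equal
```

Desynchronized update times break this argument, which is the unfairness the paper targets.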

5.
To address the long running time of scalar multiplication and multi-scalar multiplication in elliptic-curve cryptosystems, a multi-base integer representation with bases 2, 3, and 7 is designed. Combining it with the multi-base number system (MBNS) and the sliding-window algorithm, two fast multi-scalar multiplication algorithms are proposed, based on the MBNS sliding window (Sliding MBNS) and the interleaved MBNS sliding window (I-MBNS), and their average … over binary and prime fields at different window widths are analyzed and compared.

6.
王晓媛  梁丰  徐磊  蒋燕荣  凌璁 《计算机应用》2006,26(3):537-539
The effect of access-point (AP) buffer size on TCP fairness in wireless LANs is studied. By analyzing the relationship between the average window and the average loss rate, and between the window in use and the average loss rate, together with the window constraints required for TCP fairness, a method is proposed for computing the AP buffer size needed for fairness between TCP uplink and downlink flows. NS2 simulations verify the accuracy of the method.

7.
Research on a height-first addition-chain m-ary fast scalar multiplication algorithm
To improve the efficiency of scalar point multiplication on mobile computing devices and strengthen its security, the execution and acceleration characteristics of addition-chain, m-ary, and other scalar multiplication methods are compared, and an Improved-m-ary scalar multiplication method based on height-first addition chains and free window widths is proposed. Analysis and simulation results show that the method effectively reduces the scalar's average Hamming weight and the cost of scalar multiplication, and its built-in window-value scrambling mechanism gives it strong immunity to side-channel analysis attacks.

8.
TEARM, a multicast congestion-control protocol that emulates TCP behaviour at the receiver, is proposed. Each receiver independently maintains a congestion window and adjusts its size by emulating TCP, then converts the window value into an expected rate, reported to the sender as a weighted average over a period of time. In addition, a representative-based mechanism suppresses feedback, and a history-discounting mechanism improves responsiveness. Simulations show that the protocol offers good TCP fairness, rate smoothness, scalability, and responsiveness, making it suitable for streaming-media multicast traffic.

9.
Optimal control of network congestion is studied. As network load keeps growing, transmission efficiency falls. Traditional congestion-control algorithms require the system to adjust the TCP congestion window dynamically to track the real-time capacity of the wireless network, which is hard to capture in an accurate mathematical model, leading to low bandwidth utilization and severe congestion. To reduce the probability of congestion, an improved wireless TCP congestion-control algorithm is proposed that focuses on adjusting the size of the TCP congestion window: a BP neural network is first trained on the relevant parameters, effectively solving the window-adjustment problem and thereby realizing congestion avoidance, fast retransmit, and fast recovery, improving network performance. Experimental results show that the improved algorithm raises average throughput and bandwidth utilization and effectively avoids congestion.

10.
Analysis and improvement of the fairness of TCP flows in wireless ad hoc networks
张磊  王学慧  窦文华 《软件学报》2006,17(5):1078-1088
The fairness of TCP (transmission control protocol) flows in multi-hop wireless ad hoc networks is studied, and the IEEE 802.11 DCF protocol is found to cause severe unfairness in this setting: some nodes monopolize the network bandwidth while others starve. First, simulation analysis traces the unfairness among TCP flows to the unfairness of the MAC (media access and control) protocol, aggravated by TCP's timeout mechanism. Then a probabilistic model quantifies the relationship between TCP unfairness and the MAC protocol's parameters, showing that fairness is directly related to TCP segment size and that enlarging the MAC protocol's initial contention window effectively improves fairness. Accordingly, an adaptive-backoff MAC improvement is proposed that dynamically adjusts the initial backoff window according to TCP segment size. Theoretical analysis and simulation show that the algorithm largely alleviates the unfairness without seriously degrading network throughput.

11.
Analysis of MIMD congestion control algorithm for high speed networks
E.  K.  C.  A.A.  B.J.   《Computer Networks》2005,48(6):972-989
Proposals to improve the performance of TCP in high speed networks have been recently put forward. Examples of such proposals include High Speed TCP, Scalable TCP, and FAST. In contrast to the additive increase multiplicative decrease algorithm used in the standard TCP, Scalable TCP uses a multiplicative increase multiplicative decrease (MIMD) algorithm for the window size evolution. In this paper, we present a mathematical analysis of the MIMD congestion control algorithm in the presence of random losses. Random losses are typical of wireless networks but can also be used to model losses in wireline networks with a high bandwidth-delay product. Our approach is based on showing that the logarithm of the window size evolution has the same behaviour as the workload process in a standard G/G/1 queue. The Laplace–Stieltjes transform of the equivalent queue is then shown to directly provide the throughput of the congestion control algorithm and the higher moments of the window size. Using ns-2 simulations, we validate our findings using Scalable TCP.
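The key observation in this abstract, that log(window) under MIMD evolves additively, is easy to demonstrate numerically. The sketch below is a hedged toy model: the increase factor, decrease factor, and random-loss probability are illustrative, not taken from the paper.

```python
# MIMD window evolution under random losses: the window grows by a factor
# (1 + a) per update and is cut by a factor b on each random loss, so
# log(window) performs a purely additive random walk -- the observation
# behind the paper's reduction to a G/G/1 queue workload process.
import math
import random

def mimd_trace(steps=10000, a=0.01, b=0.5, p=0.02, seed=1):
    random.seed(seed)
    w, log_w = 1.0, []
    for _ in range(steps):
        if random.random() < p:
            w *= b            # multiplicative decrease on random loss
        else:
            w *= (1 + a)      # multiplicative increase otherwise
        log_w.append(math.log(w))
    return log_w

trace = mimd_trace()
# the increments of log(w) take only two values: log(1+a) and log(b)
steps = {round(trace[i + 1] - trace[i], 9) for i in range(len(trace) - 1)}
print(steps == {round(math.log(1.01), 9), round(math.log(0.5), 9)})
```

In the additive picture, the loss events play the role of arrivals to the equivalent queue, which is what makes the Laplace–Stieltjes machinery applicable.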

12.
The advantages and disadvantages of the additive-increase/multiplicative-decrease (AIMD) control mechanism and proportional-fair scheduling in Terrestrial Trunked Radio (TETRA) networks are compared, and it is pointed out that TCP's AIMD mechanism degrades the QoS of TETRA trunked systems. A new queue-management method for TETRA trunked networks is proposed: a queue-management counter based on a dynamic starting-point mechanism is designed, with queue management driven by average channel quality. Simulations verify the method's effectiveness in TETRA trunked networks.

13.
This paper investigates the necessary features of an effective clause weighting local search algorithm for propositional satisfiability testing. Using the recent history of clause weighting as evidence, we suggest that the best current algorithms have each discovered the same basic framework, that is, to increase weights on false clauses in local minima and then to periodically normalize these weights using a decay mechanism. Within this framework, we identify two basic classes of algorithm according to whether clause weight updates are performed additively or multiplicatively. Using a state-of-the-art multiplicative algorithm (SAPS) and our own pure additive weighting scheme (PAWS), we constructed an experimental study to isolate the effects of multiplicative in comparison to additive weighting, while controlling other key features of the two approaches, namely, the use of pure versus flat random moves, deterministic versus probabilistic weight smoothing and multiple versus single inclusion of literals in the local search neighbourhood. In addition, we examined the effects of adding a threshold feature to multiplicative weighting that makes it indifferent to similar cost moves. As a result of this investigation, we show that additive weighting can outperform multiplicative weighting on a range of difficult problems, while requiring considerably less effort in terms of parameter tuning. Our examination of the differences between SAPS and PAWS suggests that additive weighting does benefit from the random flat move and deterministic smoothing heuristics, whereas multiplicative weighting would benefit from a deterministic/probabilistic smoothing switch parameter that is set according to the problem instance. We further show that adding a threshold to multiplicative weighting produces a general deterioration in performance, contradicting our earlier conjecture that additive weighting has better performance due to having a larger selection of possible moves. 
This leads us to explain the differences in performance as being mainly caused by the greater emphasis of additive weighting on penalizing clauses with relatively less weight.
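The additive-weighting framework described above (increase weights on false clauses in local minima, periodically decay them) can be sketched as a toy PAWS-style loop. This is a hedged reconstruction for illustration, not the authors' implementation: the CNF instance, the deterministic all-False start, and the decay schedule are assumptions made for reproducibility.

```python
# Toy additive clause-weighting local search (PAWS-style) for SAT.
# Literals are signed ints: v > 0 means variable v-1 is True, v < 0 False.
def paws_like(clauses, n_vars, max_flips=1000, decay_every=50):
    assign = [False] * n_vars        # deterministic start (assumption)
    weight = [1] * len(clauses)

    def sat(cl):
        return any(assign[abs(v) - 1] == (v > 0) for v in cl)

    def total_false_weight():
        return sum(w for w, cl in zip(weight, clauses) if not sat(cl))

    for flips in range(max_flips):
        false_cls = [i for i, cl in enumerate(clauses) if not sat(cl)]
        if not false_cls:
            return assign            # all clauses satisfied
        def cost_after(var):         # weighted cost if we flipped var
            assign[var] = not assign[var]
            c = total_false_weight()
            assign[var] = not assign[var]
            return c
        cand = sorted({abs(v) - 1 for i in false_cls for v in clauses[i]})
        best = min(cand, key=cost_after)
        if cost_after(best) < sum(weight[i] for i in false_cls):
            assign[best] = not assign[best]       # improving flip
        else:                                     # local minimum:
            for i in false_cls:
                weight[i] += 1                    # additive weight bump
            if flips % decay_every == 0:          # periodic decay step
                weight = [max(1, w - 1) for w in weight]
    return None

cnf = [(1, 2), (-1, 3), (-2, -3), (2, 3)]
model = paws_like(cnf, 3)
print(model)
```

A multiplicative scheme (SAPS-style) would instead scale the weights of false clauses by a factor and smooth them toward their mean, which is the contrast the study isolates.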

14.
In this paper, the problems of stochastic robust approximate covariance assignment and robust covariance feedback stabilization, which are applied to variable parameters of additive increase/multiplicative decrease (AIMD) networks, are considered. The main idea of the developed algorithm is to use the parameter settings of an AIMD network congestion control scheme, where parameters may assign the desired network’s window covariance, with respect to the current network conditions. The aim is to search for the optimal AIMD parameters of a feedback gain matrix such that the objective functions defined via appropriate robustness measures and covariance assignment constraints can be optimized using an adaptive genetic algorithm (AGA). It is shown that the results can be used to develop tools for analyzing the behavior of AIMD communication networks. Quality of service (QoS) and other performance measures of the network have been improved by using the proposed congestion control. The accuracy of the controller is demonstrated by using MATLAB and NS software programs.

15.
An improved binary search
王海涛  朱洪 《计算机工程》2006,32(10):60-62,118
Many search algorithms exist; among those for ordered sequences, binary search is the most widely used. With binary search, finding an element among n elements of an ordered sequence takes at most ⌊log n⌋+1 comparisons. In many situations, much about the distribution of the sequence is known before searching; for example, if an upper bound on the maximum difference between adjacent elements is known, a search algorithm more efficient than binary search is possible. This paper presents such an algorithm, called improved binary search. Its performance is clearly better than binary search: depending on the distribution of the sequence, its worst-case number of comparisons for finding an element lies between 1 and ⌊log n⌋+1, versus binary search's ⌊log n⌋+1. In practical applications, improved binary search can greatly raise search efficiency.
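The gap-bound idea in this abstract can be sketched directly: if adjacent elements differ by at most d, then a[j] - a[i] ≤ d·(j - i), so a probe bounds how far away the target can be and the interval shrinks faster than plain bisection. This is a hedged reconstruction of the idea, not the paper's exact algorithm; the array and bound d below are illustrative.

```python
# Binary search improved with a known upper bound d on the gap between
# adjacent elements: from a probe at mid, the target (if present) must lie
# at least ceil(|x - a[mid]| / d) positions away, so we can skip ahead.
import math

def gap_aware_search(a, x, d):
    lo, hi = 0, len(a) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if a[mid] == x:
            return mid
        if a[mid] < x:
            # x is at least ceil((x - a[mid]) / d) positions to the right
            lo = max(mid + 1, mid + math.ceil((x - a[mid]) / d))
        else:
            hi = min(mid - 1, mid - math.ceil((a[mid] - x) / d))
    return -1

a = [1, 3, 4, 7, 9, 12, 14, 15, 18, 20]   # adjacent gaps all <= 3
print(gap_aware_search(a, 18, 3), gap_aware_search(a, 5, 3))
```

When the data are spread close to the bound d, the skip can eliminate most of the interval in one probe, matching the best-case single-comparison behaviour the abstract mentions.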

16.
Suprathreshold stochastic resonance under additive and multiplicative noise in a class of nonlinear neural-network systems is studied via the correlation coefficient. Under additive noise alone or multiplicative noise alone, for each fixed system threshold, suprathreshold stochastic resonance occurs more readily under additive noise, and the peak correlation coefficient is higher than under multiplicative noise, indicating that additive noise is more beneficial for improving signal correlation. Raising the system threshold weakens the effect, while increasing the number of threshold units strengthens it. Suprathreshold stochastic resonance also occurs under additive and multiplicative noise acting together; choosing the system threshold appropriately and increasing the number of threshold units makes the phenomenon more pronounced. With the multiplicative noise fixed and the additive noise varied, resonance occurs more readily, and with better effect, than with the additive noise fixed and the multiplicative noise varied.

17.
To solve the data collisions that occur when multiple tags transmit to a receiver in an underground personnel-positioning system, an improved binary exponential backoff algorithm is proposed. The algorithm adjusts the collision window with multiplicative increase and linear decrease, sets two thresholds, applies different update rules for the backoff generator under different traffic loads, and synchronously updates the optimized window value, so that tags can adaptively and quickly access the channel. Tests show that the improved algorithm supports up to 150 concurrent identifications and a maximum movement speed of 10 m/s, both better than the classical binary exponential backoff algorithm. It raises the data-transfer rate, reduces the missed-tag rate, and effectively solves the anti-collision problem of underground multi-target identification.
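The core window-adjustment rule described above (multiplicative increase on collision, linear decrease on success, the reverse of TCP's AIMD) can be sketched in a few lines. This is a hedged illustration: the bounds, the step size, and the event sequence are made up, and the paper's two-threshold, traffic-dependent update rules are not reproduced.

```python
# Contention-window update: grow multiplicatively on collision, shrink
# linearly on success, clamped between CW_MIN and CW_MAX. Constants are
# illustrative, not from the paper.
CW_MIN, CW_MAX = 8, 256

def next_window(cw, collided, step=4):
    if collided:
        return min(cw * 2, CW_MAX)   # multiplicative increase on collision
    return max(cw - step, CW_MIN)    # linear decrease on success

cw = CW_MIN
for event in [True, True, True, False, False, True]:
    cw = next_window(cw, event)
print(cw)  # 8 -> 16 -> 32 -> 64 -> 60 -> 56 -> 112
```

Growing fast on collisions and shrinking slowly on successes keeps the window large under heavy tag contention, which is what lets many tags access the channel without repeated collisions.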

18.
The working-set bound [Sleator and Tarjan in J. ACM 32(3), 652–686, 1985] roughly states that searching for an element is fast if the element was accessed recently. Binary search trees, such as splay trees, can achieve this property in the amortized sense, while data structures that are not binary search trees are known to have this property in the worst case. We close this gap and present a binary search tree called a layered working-set tree that guarantees the working-set property in the worst case. The unified bound [Bădoiu et al. in Theor. Comput. Sci. 382(2), 86–96, 2007] roughly states that searching for an element is fast if it is near (in terms of rank distance) to a recently accessed element. We show how layered working-set trees can be used to achieve the unified bound to within a small additive term in the amortized sense while maintaining in the worst case an access time that is both logarithmic and within a small multiplicative factor of the working-set bound.

19.
A typical class of structures to organize ordered files is multiway trees, among which the most widely used is the perfectly balanced B-tree. In this paper we present the new family of BMT multiway trees, which are kept balanced in height, similarly to the classical binary height-balanced trees used in central memory. The height of a BMT, that is, the maximum search length for a key, is shown to be a logarithmic function of the number of keys in the worst case. Updating a BMT by key insertion is studied, and a technique to keep the tree balanced is presented. A comparison between the performance of BMTs and B-trees leads to the conclusion that the two structures are roughly comparable as to search length for a key, while BMTs require less memory space than B-trees for small node sizes. The real difference between BMTs and B-trees is in the rebalancing operation, which requires work proportional to the node size in BMTs and to the tree height in B-trees.

20.
Execution times of five strategies of binary search with variable-length keys are experimentally evaluated. For comparison, the experiments also include slide cubic search, which has been recommended for searching the index pages of database management systems. The results show that, in the environment of these experiments, it is possible to organize variable-length binary search clearly more efficiently than slide root search or slide binary search. The fastest version of variable-length binary search was only slightly slower than the usual fixed-length binary search implemented for keys padded to the maximum length of the range used.


Copyright©北京勤云科技发展有限公司  京ICP备09084417号