Similar Documents
20 similar documents found (search time: 31 ms)
1.
Hash table lookup is widely used as a fast data query algorithm. To handle hash collisions, separate chaining is commonly chosen when building a hash table, but traversing the linked list during lookup greatly reduces search efficiency. Combining separate chaining with binary search, this paper proposes an improved method that raises hash table lookup efficiency. Experimental results show that the method shortens the search length when collisions occur and thus reduces the time needed for queries.
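A minimal sketch of the idea, assuming each bucket is kept as a sorted list so that lookups within a bucket use binary search instead of walking a linked chain (the bucket count and tuple layout are illustrative, not the paper's):

```python
import bisect

class SortedBucketHashTable:
    """Hash table whose buckets are sorted lists, so a lookup within
    a bucket is O(log n) binary search rather than a chain traversal."""

    def __init__(self, num_buckets=1024):
        self.buckets = [[] for _ in range(num_buckets)]

    def _bucket(self, key):
        return self.buckets[hash(key) % len(self.buckets)]

    def insert(self, key, value):
        bucket = self._bucket(key)
        i = bisect.bisect_left(bucket, (key,))
        if i < len(bucket) and bucket[i][0] == key:
            bucket[i] = (key, value)          # overwrite existing key
        else:
            bucket.insert(i, (key, value))    # keep the bucket sorted

    def lookup(self, key):
        bucket = self._bucket(key)
        i = bisect.bisect_left(bucket, (key,))   # binary search in bucket
        if i < len(bucket) and bucket[i][0] == key:
            return bucket[i][1]
        raise KeyError(key)
```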

2.
To address the weak image representation and low retrieval efficiency of hashing based on locality preserving projection (LPP) in image retrieval, an image retrieval algorithm called random-rotation locality preserving hashing is proposed by combining LPP with principal component analysis (PCA). First, PCA is applied to the samples, and the PCA transform matrix is randomly rotated to form the dimension-reduction matrix; projecting the original samples onto this matrix yields the PCA-reduced samples. To fully exploit the similarity structure among samples, LPP is then applied to the PCA-reduced samples, and a random matrix is introduced to offset the eigenvectors, producing the final encoding projection matrix. The original samples are projected onto this matrix to obtain the final reduced samples, which are hash-encoded into effective binary codes for image retrieval. The algorithm accounts for both the global and local similarity structure among samples; applying a random rotation to the PCA matrix reduces the quantization error between codes and improves the discriminative power of the image features. Performance tests on three face datasets, compared with related methods, show good results and demonstrate the effectiveness of the method.
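A minimal sketch of the PCA-plus-random-rotation stage, with sign binarization standing in for the final encoding; the LPP mapping and the random eigenvector offset described in the abstract are omitted for brevity:

```python
import numpy as np

def random_rotation_pca_hash(X, n_bits, seed=0):
    """PCA-reduce X, apply a random orthogonal rotation to spread
    quantization error across bits, and binarize by sign."""
    rng = np.random.default_rng(seed)
    Xc = X - X.mean(axis=0)                       # center the data
    # PCA projection: top n_bits right singular vectors
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    W = Vt[:n_bits].T                             # d x n_bits projection
    # Random orthogonal rotation via QR of a Gaussian matrix
    R, _ = np.linalg.qr(rng.standard_normal((n_bits, n_bits)))
    Z = Xc @ W @ R                                # rotated PCA embedding
    return (Z > 0).astype(np.uint8)               # binary hash codes

codes = random_rotation_pca_hash(np.random.rand(100, 64), n_bits=16)
```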

3.
In order to implement quick and effective search, save storage space, and improve the poor preservation of affinity relationships between high-dimensional data and its codes in image retrieval, a new linear embedding hashing is proposed by introducing similarity preservation. First, the whole data set is clustered into several classes; then a similarity prediction function is used to maintain the affinity relationships between the high-dimensional data and its codes, establishing the objective function. By minimizing the margin loss function, the optimal embedding matrix is obtained. Experimental results show that the linear embedding hashing algorithm outperforms other classic binary encoding strategies in precision and recall.

4.
At present, most existing cross-modal hashing methods fail to explore the relevance and diversity of different modality data, leading to unsatisfactory search performance. To solve this problem, a simple yet efficient deep hashing model is proposed, named deep consistency-preserving hashing for cross-modal retrieval, which simultaneously exploits modality-common and modality-private representations through a simple end-to-end network structure and generates compact, discriminative hash codes for multiple modalities. Compared with other deep cross-modal hashing methods, the extra complexity and computation of the proposed method are negligible while performance improves significantly. Comprehensive evaluations on three cross-modal benchmark datasets illustrate that the proposed method is superior to state-of-the-art cross-modal hashing methods.

5.
This paper proposes a novel algorithm to address linearly inseparable data and low accuracy in the image retrieval field. To obtain hash codes, the algorithm combines the kernel trick with iterative quantization. First, the kernel trick maps the image data from a low-dimensional space into a high-dimensional one, where the data become linearly separable and the trained hash codes prove effective. Second, while training the hash function, iterative quantization quantizes the image data to the closest hash codes. Finally, the quantization error is minimized and the hash codes are generated for image retrieval. Experimental results show that it outperforms the compared hashing algorithms on two image benchmarks.
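A minimal sketch of the iterative-quantization step in the style of ITQ, assuming V is already a zero-centered low-dimensional embedding (the kernel mapping is taken as given):

```python
import numpy as np

def itq_rotation(V, n_iter=50, seed=0):
    """Alternately fix the rotation R and the binary codes B to
    minimize the quantization error ||B - V R||_F."""
    rng = np.random.default_rng(seed)
    c = V.shape[1]
    R, _ = np.linalg.qr(rng.standard_normal((c, c)))  # random init rotation
    for _ in range(n_iter):
        B = np.sign(V @ R)                 # fix R, snap to closest codes
        B[B == 0] = 1
        U, _, Wt = np.linalg.svd(B.T @ V)  # fix B, update rotation
        R = (U @ Wt).T                     # orthogonal Procrustes solution
    return np.sign(V @ R), R
```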

6.
The homomorphic hash algorithm (HHA) is introduced to verify, on the fly, wireless sensor network (WSN) over-the-air programming (OAP) data based on rateless codes. The receiver calculates the hash value of a group of data with the homomorphic hash function and compares it with the received message digest. Because the feedback channel is deliberately removed during the distribution process, rateless codes are vulnerable to security issues such as packet contamination or attack. This method prevents contamination of or attacks on the rateless codes and reduces the potential risk of decoding failure. Compared with SHA1 and MD5, HHA has a much shorter message digest and therefore delivers more data. Simulation results show that, to transmit and verify the same amount of OAP data, the HHA method sends 17.9% to 23.1% fewer packets than MD5 and SHA1 under different packet loss rates.
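A minimal sketch of why a homomorphic hash lets a receiver verify coded packets without a feedback channel: the hash of any linear combination of source blocks equals the same combination of the source-block hashes, so digests of encoded packets can be checked against digests distributed in advance. The tiny primes and bases here are purely illustrative (real deployments use large primes):

```python
# Multiplicative homomorphic hash over Z_p with subgroup order q:
# h(x) = prod_i g_i^(x_i) mod p, hence h(x + y) = h(x) * h(y) mod p.
p, q = 1019, 509            # toy primes with q | (p - 1); illustrative only
GENS = [4, 9, 25, 49]       # toy per-position bases of order q (assumed)

def hhash(block):
    """Hash a block, i.e. a list of symbols in Z_q."""
    h = 1
    for g, x in zip(GENS, block):
        h = h * pow(g, x % q, p) % p
    return h

a = [3, 1, 4, 1]
b = [2, 7, 1, 8]
combined = [(x + y) % q for x, y in zip(a, b)]       # rateless-coded packet
assert hhash(combined) == hhash(a) * hhash(b) % p    # verifiable on the fly
```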

7.
A fast assignment algorithm for independent CDMA address codes is proposed. It differs from traditional address-code space search algorithms as follows: a bisection technique generates the other address codes starting from an initial address code; to keep the address codes mutually independent, half of an address code's bits change value in each generation step, while the several child codes produced in one step keep the same values in their first half; and the whole set of address codes forms a binary tree structure. The algorithm also suits other scenarios that require data independence.
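A minimal sketch of the binary-tree splitting idea, using the classic orthogonal-variable-spreading-factor construction as a stand-in (the paper's exact generation rule may differ): each code c spawns two children (c, c) and (c, -c), so codes double in length per tree level, siblings differ in half their bits, and all leaves of one level are mutually orthogonal:

```python
def split(code):
    """Each node spawns two children: (c, c) and (c, -c); the second
    child flips half of its bits relative to the first."""
    return [code + code, code + [-x for x in code]]

def code_tree_level(depth):
    """All address codes at a given depth of the binary tree."""
    level = [[1]]                      # root: the initial address code
    for _ in range(depth):
        level = [child for code in level for child in split(code)]
    return level

codes = code_tree_level(3)             # 8 mutually orthogonal codes, length 8
for a in codes:
    for b in codes:
        ip = sum(x * y for x, y in zip(a, b))
        assert ip == (len(a) if a == b else 0)   # pairwise orthogonality
```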

8.
To extract and recognize, in real time, the contents of specific tables in arbitrary regions of complex invoices, an adaptive recognition method based on the Tesseract-OCR engine is proposed. First, OpenCV applies a series of preprocessing steps to the invoice image, such as filtering and adaptive thresholding, to obtain a binary image. Then morphological opening extracts the table's line segments across the image, the table position is located, and the table intersection coordinates are combined with a user-defined template to adaptively match headers to contents. Finally, jTessBoxEditor is used to train and optimize a character library for the table region, and character recognition is performed with Tesseract-OCR. Experimental results show that the method achieves a high recognition accuracy, supports adaptive recognition of regions of interest, and offers high usability.
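A minimal sketch of the preprocessing-plus-morphology pipeline, assuming the opencv-python and pytesseract packages; the kernel sizes, threshold parameters, and invoice path are illustrative:

```python
import cv2
import pytesseract

img = cv2.imread("invoice.png", cv2.IMREAD_GRAYSCALE)  # hypothetical path
img = cv2.medianBlur(img, 3)                           # denoise
binary = cv2.adaptiveThreshold(img, 255, cv2.ADAPTIVE_THRESH_MEAN_C,
                               cv2.THRESH_BINARY_INV, 15, 10)

# Morphological opening with long thin kernels keeps only table rulings.
h_kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (40, 1))
v_kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (1, 40))
h_lines = cv2.morphologyEx(binary, cv2.MORPH_OPEN, h_kernel)
v_lines = cv2.morphologyEx(binary, cv2.MORPH_OPEN, v_kernel)
grid = cv2.bitwise_and(h_lines, v_lines)               # table intersections

# Intersection coordinates locate the table; OCR the cropped region.
ys, xs = grid.nonzero()
cell = binary[min(ys):max(ys), min(xs):max(xs)]        # crude table crop
text = pytesseract.image_to_string(cv2.bitwise_not(cell), lang="eng")
print(text)
```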

9.
Cryptanalysts can now efficiently find collisions for international cryptographic hash algorithms such as MD5 and SHA1 in a short time. Increasing the randomness of the hash value through entropy growth is an effective way to strengthen a hash algorithm's collision resistance, so an improved scheme is proposed that integrates error-correcting codes into the iterative structure of the SM3 algorithm. First, based on the linear properties of error-correcting codes and the principle of maximizing the minimum Hamming distance, a binary linear block code constructed via matroid theory is selected; its systematic generator matrix is computed, cyclic shifts are applied to eliminate regularities between bits, and the resulting effective codewords are computed. Second, optimal codewords are chosen from the linear block code according to a periodicity principle to build the initial constant values, which are assigned to the initial registers; the initial registers are introduced into the iterative structure to form the algorithm's compression function, completing a second construction of the hash algorithm's iterative structure. Finally, since the information entropy of the hash value can measure the algorithm's diffusion, the proposed scheme is compared experimentally with two publicly available international cryptographic hash algorithms, including tests of efficiency, memory consumption, and the avalanche effect, followed by a comprehensive evaluation. The results show that the scheme exhibits a stable avalanche effect without changing computational efficiency, its memory consumption during operation is 0.01 to 0.07 MB lower than that of SM3, and the information entropy of its hash values is higher than that of the two comparison algorithms. This demonstrates that the proposed error-correcting-code-based improvement achieves higher randomness between hash value bits through entropy growth, better hides the statistical relationship between plaintext and hash value, and improves the security of the cryptographic hash algorithm.

10.
Lung medical image retrieval based on content similarity plays an important role in the computer-aided diagnosis of lung cancer. In recent years, binary hashing has become a hot topic in this field due to its compact storage and fast query speed. Traditional hashing methods often rely on hand-crafted high-dimensional features, which may not be optimally suited to lung nodule images. Moreover, different hashing bits contribute differently to retrieval, so treating the bits equally hurts retrieval accuracy. Hence, an image retrieval method for lung nodule images based on convolutional neural networks and hashing is proposed. First, a pre-trained and fine-tuned convolutional neural network learns multi-level semantic features of the lung nodules, and principal component analysis removes redundant information while preserving the informative semantic features. Second, the method relies on nine sign labels of lung nodules in the training set, combining them with the semantic features to construct the hashing functions. Finally, returned lung nodule images are ranked with a query-adaptive search based on weighted Hamming distance. Extensive experiments and evaluations on the dataset demonstrate that the proposed method significantly improves the expressive ability of lung nodule images, further validating its effectiveness.
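A minimal sketch of ranking by weighted Hamming distance, assuming per-bit weights have already been learned; the codes and weights here are illustrative:

```python
import numpy as np

def weighted_hamming_rank(query_code, db_codes, bit_weights):
    """Rank database codes by weighted Hamming distance to the query:
    each differing bit contributes its own learned weight instead of
    every bit counting equally."""
    diff = db_codes != query_code            # (n, bits) boolean mismatches
    dists = diff @ bit_weights               # weighted sum per database item
    return np.argsort(dists)                 # indices, nearest first

db = np.random.randint(0, 2, size=(1000, 48))   # toy 48-bit codes
w = np.random.rand(48)                           # assumed learned bit weights
ranking = weighted_hamming_rank(db[0], db, w)
print(ranking[:10])                              # top-10 neighbours
```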

11.
Based on the properties of QR (quadratic residue) codes and the weights of syndromes, a new simplified table-lookup decoding algorithm for binary QR codes is given. Each row of the decoding table is a vector of the form (e, eH), where e is an error pattern whose errors occur only in the information part and whose number of errors does not exceed half the code's error-correcting capability, and eH is the syndrome of e. The algorithm applies to all binary QR codes, and its decoding table has the fewest rows among the currently known table-lookup decoding algorithms for binary QR codes. Hence the algorithm has both theoretical significance and practical value.
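A minimal sketch of syndrome table-lookup decoding over GF(2), using a tiny [7,4] Hamming code as a stand-in for a QR code; the paper's table-reduction trick is not reproduced, only the (e, eH) lookup mechanics:

```python
import numpy as np
from itertools import combinations

# Parity-check matrix of the [7,4] Hamming code (illustrative stand-in).
H = np.array([[1, 0, 1, 1, 1, 0, 0],
              [1, 1, 0, 1, 0, 1, 0],
              [0, 1, 1, 1, 0, 0, 1]])

def syndrome(v):
    return tuple(H @ v % 2)            # s = v H^T over GF(2)

# Decoding table: rows (e, eH) for every correctable error pattern e.
table = {}
for pos in combinations(range(7), 1):  # t = 1 for the Hamming code
    e = np.zeros(7, dtype=int)
    e[list(pos)] = 1
    table[syndrome(e)] = e

def decode(r):
    s = syndrome(r)
    e = table.get(s, np.zeros(7, dtype=int))   # look up the error pattern
    return (r + e) % 2                          # corrected codeword

r = np.array([1, 0, 1, 1, 0, 1, 1])            # received word, 1 bit flipped
print(decode(r))
```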

12.
To solve the single-point-of-failure problem of a server, a distributed hash table (DHT) network is used to store and distribute certificates. The strong robustness of a DHT network provides a stable storage platform for certificate storage, distribution, and revocation. In such an environment, users can still obtain and verify the certificates they need even when a server malfunctions.
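A minimal sketch of the storage idea: each certificate is keyed by a hash and placed on the node whose identifier follows the key on a consistent-hashing ring, so losing one node leaves the remaining certificates reachable. The node names and certificate payload are illustrative, not the paper's design:

```python
import bisect
import hashlib

def h(s):                                   # 32-bit ring positions
    return int(hashlib.sha1(s.encode()).hexdigest(), 16) % 2**32

class CertDHT:
    """Toy consistent-hashing ring storing certificates by key."""
    def __init__(self, nodes):
        self.ring = sorted((h(n), n) for n in nodes)
        self.store = {n: {} for n in nodes}

    def _owner(self, key):                  # first node clockwise of the key
        i = bisect.bisect(self.ring, (h(key),)) % len(self.ring)
        return self.ring[i][1]

    def put(self, key, cert):
        self.store[self._owner(key)][key] = cert

    def get(self, key):
        return self.store[self._owner(key)].get(key)

dht = CertDHT(["node-a", "node-b", "node-c"])   # hypothetical peers
dht.put("alice", "-----BEGIN CERTIFICATE-----...")
print(dht.get("alice"))
```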

13.
To remedy the shortcomings of existing Web transaction identification models, a new model for identifying Web transactions, the IPRC model, is proposed. It classifies Web pages according to the references on the main index page and the document directory structure, and uses this classification as the basis for identifying Web transactions. On this basis, an algorithm for mining frequent access patterns, WDHP, is presented. WDHP inherits from the DHP algorithm the basic approach of using a hash tree to filter candidate sets and pruning the database, and it stores the database in memory as an access-path tree so that subsequent mining is done in memory, which not only reduces the number of database scans but also greatly lowers the algorithm's time complexity. Experiments show that WDHP outperforms not only DHP but also the typical memory-based WAP algorithm.

14.
A Perceptual Image Hashing Algorithm Robust to Rotation Attacks Based on Zernike Moments
The algorithm uses Zernike moments as the image feature for perceptual hashing. Because Zernike moments are invariant to image rotation, the algorithm is robust under rotation attacks; and because Zernike moments form an orthogonal representation of the image that captures its content well, the algorithm also discriminates well between different images. The image is first normalized, then its Zernike moments are extracted as features, scrambled with a secret key, and quantized to generate the hash string. The algorithm...
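A minimal sketch of the pipeline using the mahotas library's Zernike implementation, with the key-based scrambling and the paper's quantization scheme simplified to a key-seeded permutation plus a median-threshold binarization; the radius and degree are illustrative:

```python
import numpy as np
import mahotas

def zernike_phash(gray_img, key=42, radius=64, degree=8):
    """Perceptual hash from Zernike moment magnitudes, which are
    rotation-invariant, so the hash survives rotation attacks."""
    feats = mahotas.features.zernike_moments(gray_img, radius, degree=degree)
    rng = np.random.default_rng(key)                    # key-driven scramble
    feats = rng.permutation(feats)
    return (feats > np.median(feats)).astype(np.uint8)  # quantize to bits

img = (np.random.rand(128, 128) > 0.5)                  # toy binary image
print(zernike_phash(img))
```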

15.
P2P networks and their structural models are briefly described. In a P2P network, locating resources quickly and accurately is an important problem. The article analyzes the currently popular Chord algorithm, which is based on a distributed hash table (DHT), and, to address the latency caused by the periodic-update scheme, adopts an event-driven scheme instead.
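A minimal sketch of Chord's core lookup rule on an m-bit identifier ring: a key belongs to its successor node, and each node's finger table holds exponentially spaced shortcuts that give O(log N) hops per lookup. The node set is illustrative:

```python
M = 6                                   # identifier bits, ring size 2^M
NODES = sorted([1, 8, 14, 21, 32, 38, 42, 48, 51, 56])  # toy node IDs

def successor(ident):
    """First node clockwise from ident on the ring: the key's owner."""
    for n in NODES:
        if n >= ident % 2**M:
            return n
    return NODES[0]                     # wrap around the ring

def finger_table(node):
    """Entry i points at successor(node + 2^i): each hop can halve the
    remaining distance to the target identifier."""
    return [successor(node + 2**i) for i in range(M)]

print(finger_table(8))                  # fingers of node 8
print(successor(54))                    # node responsible for key 54
```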

16.
Wind tunnel tests combined with finite element analysis are used to study the along-line wind load on transmission lines and its mode of action. A wind tunnel test rig for the wind load on catenary-shaped transmission lines is developed, and the along-line load proportion coefficients of transmission lines with two typical sag-to-span ratios are obtained. Three distribution modes for the along-line wind load are compared through two examples, and recommendations for the along-line wind load and its distribution mode are given. The study shows that, among national codes, only the Chinese code specifies the along-line wind load; for sag-to-span ratios of 4% and 8% the along-line load proportion coefficients are both below 0.15, so the Chinese code's value of 0.25 is conservative; distribution by projected height and distribution by the code's cable shape coefficient give essentially the same results; distribution by arc length yields vertical and horizontal displacements about 12% larger than the other two methods; the recommended along-line proportion coefficient is 0.10 for ordinary transmission lines and 0.12 for long-span transmission lines; and distribution by projected height is recommended for the along-line wind load.

17.
Current recognition methods are mainly aimed at primitive BCH codes. To solve this problem, a novel recognition method based on soft decisions is proposed for binary shortened BCH codes. Using the soft-decision information, an analysis matrix is built from the hard-decision sequence. Gaussian elimination is applied to the matrix, and a binary hypothesis test is constructed to recognize the code length. A primitive BCH code is then constructed, and its parity-check matrix is tested under different primitive polynomials using the soft-decision information. Finally, the primitive polynomial and generator polynomial are recognized from the distribution of the roots of the generator polynomial. The proposed method is effective for both shortened and primitive BCH codes. Simulations verify its applicability, and the recognition results on primitive BCH codes show that it outperforms conventional recognition methods.
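A minimal sketch of the Gaussian-elimination step common to this family of methods: hard-decision words of a candidate length are stacked into a GF(2) matrix and its rank is checked, since a rank deficit signals that the words lie in a code's subspace. The candidate length and data are illustrative:

```python
import numpy as np

def gf2_rank(M):
    """Rank of a binary matrix over GF(2) by Gaussian elimination."""
    M = M.copy() % 2
    rank = 0
    for col in range(M.shape[1]):
        pivot = next((r for r in range(rank, M.shape[0]) if M[r, col]), None)
        if pivot is None:
            continue
        M[[rank, pivot]] = M[[pivot, rank]]          # swap pivot row up
        for r in range(M.shape[0]):
            if r != rank and M[r, col]:
                M[r] ^= M[rank]                       # eliminate the column
        rank += 1
    return rank

words = np.random.randint(0, 2, size=(100, 31))       # toy hard decisions
deficit = words.shape[1] - gf2_rank(words)            # > 0 hints at a code
print(deficit)
```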

18.
To address the high miss rate of traditional vulnerable-code reuse detection techniques, a detection method based on vulnerability fingerprints is proposed. The structure of open-source project vulnerability patches and the characteristics of vulnerable code are analyzed, the traits of common modifications made during code reuse are summarized, and a hash-based vulnerability fingerprint model is designed. Code is preprocessed to eliminate irrelevant factors, code blocks of a fixed number of lines are chosen as the feature-abstraction granularity, and a hash algorithm extracts the key code features. A vulnerability sample library is built by collecting open-source project vulnerability information and related code fragments; an LCS-based similarity evaluation algorithm locates reused vulnerability samples and marks them as sensitive code; vulnerability fingerprints are then used for detection, and vulnerable code is judged according to the recognition policy. Experimental results show that the fingerprint-based method effectively copes with many kinds of code modification, clearly improves detection efficiency, and has detection time that grows linearly with the amount of input code.
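A minimal sketch of the two core pieces, fixed-line-count block hashing and LCS-based similarity over the resulting fingerprint sequences; the normalization rules and block size are illustrative:

```python
import hashlib

def fingerprint(code, block_lines=4):
    """Normalize, split into fixed-line blocks, hash each block."""
    lines = [ln.strip() for ln in code.splitlines() if ln.strip()]
    blocks = ["\n".join(lines[i:i + block_lines])
              for i in range(0, len(lines), block_lines)]
    return [hashlib.md5(b.encode()).hexdigest()[:8] for b in blocks]

def lcs_len(a, b):
    """Length of the longest common subsequence of two hash sequences."""
    dp = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    for i, x in enumerate(a):
        for j, y in enumerate(b):
            dp[i + 1][j + 1] = (dp[i][j] + 1 if x == y
                                else max(dp[i][j + 1], dp[i + 1][j]))
    return dp[len(a)][len(b)]

def similarity(code, sample):
    """Fraction of the vulnerability sample's blocks matched in order."""
    fa, fb = fingerprint(code), fingerprint(sample)
    return lcs_len(fa, fb) / max(len(fb), 1)
```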

19.
In order to increase the embedding capacity of VQ-based data hiding schemes, a lossless data hiding method based on state-codebook-sorted (SCS) SMVQ-index classified coding is proposed. First, during SMVQ compression, the proposed SCS method reduces the index values and increases the percentage of embeddable indices. Then, according to the indices' distribution characteristics, the indices smaller than eight are classified into four cases. Finally, the secret data are embedded into the code stream of these indices. Experimental results indicate that, at the same image quality as VQ, the embedding capacity of the proposed scheme is superior to that of some previous schemes. Compared with the lossless data hiding scheme based on SMVQ-index residual-value coding and the ASCM method, the embedding efficiency increases by 4.5% and 48.8%, respectively.
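A minimal sketch of the classification idea, under the assumption (for illustration only) that the four cases are the pairs {0,1}, {2,3}, {4,5}, {6,7}: choosing the even or odd member of a pair carries one secret bit per embeddable index. This simplified sketch shows only the embedding mechanics; the paper's classified coding additionally keeps the scheme lossless:

```python
def embed(indices, bits):
    """Embed one bit into every SMVQ index below 8: keep the index's
    pair (case) but pick its even/odd member to encode the bit."""
    out, it = [], iter(bits)
    for idx in indices:
        if idx < 8:
            bit = next(it, None)
            out.append(idx if bit is None else (idx & ~1) | bit)
        else:
            out.append(idx)               # index not embeddable, pass through
    return out

def extract(stego):
    return [idx & 1 for idx in stego if idx < 8]

stego = embed([3, 12, 5, 0, 9, 7], [1, 0, 1, 1])
print(stego, extract(stego))              # bits recovered from small indices
```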

20.
To remedy the shortcomings of the ASIG protocol, a scanning algorithm for remote electrical tilt antenna devices based on a binary scan tree is proposed. In the leaf-scan phase, the scan codes of the corresponding readable periods from the previous scan round are reused to scan and identify the readable nodes directly; in the root-scan phase, scanning starts from the root node to identify newly added devices. By exploiting an adaptive collision-avoidance mechanism and the two-phase leaf-root scanning method, the number of collisions during scanning is effectively reduced. Simulation experiments show that the algorithm effectively shortens the scan time for remote electrical tilt antenna devices and improves device identification efficiency.
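A minimal sketch of the root-scan phase as a standard binary query-tree identification, assuming unique fixed-length binary device IDs; the leaf-phase reuse of previous scan codes is not shown:

```python
def query_tree_scan(tags, prefix=""):
    """Broadcast a prefix; a collision (more than one responder) splits
    the query into prefix+0 and prefix+1 until each device answers alone."""
    responders = [t for t in tags if t.startswith(prefix)]
    if not responders:
        return []                        # idle slot: nobody answers
    if len(responders) == 1:
        return responders                # readable slot: device identified
    return (query_tree_scan(tags, prefix + "0") +
            query_tree_scan(tags, prefix + "1"))

print(query_tree_scan(["0011", "0101", "1100", "1110"]))
```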
