Similar Literature
20 similar documents found (search time: 171 ms).
1.
Huang Ying. Electronic Science and Technology (电子科技), 2013, 26(10): 73-75
Starting from the top-down approach to suffix tree construction, this paper proposes a method that builds the suffix tree in stages. First, all suffixes of the string are sorted alphabetically; then the longest common prefix (LCP) of each pair of adjacent suffixes in sorted order is computed; finally, the suffix tree is built from the suffix order and the LCPs. The method needs no suffix links and builds the suffix tree in linear time.
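As an illustration of the two preparatory steps, here is a minimal Python sketch that sorts the suffixes and computes adjacent LCPs. It uses naive comparison sorting for clarity; the paper's linear-time bound requires a linear-time suffix sort (e.g., DC3), which this sketch does not implement.

```python
def suffix_array_naive(s):
    """Sort all suffixes of s alphabetically (naive; the paper's
    linear-time bound needs a linear-time sort such as DC3)."""
    return sorted(range(len(s)), key=lambda i: s[i:])

def adjacent_lcps(s, sa):
    """LCP of each pair of neighbouring suffixes in sorted order."""
    lcps = []
    for a, b in zip(sa, sa[1:]):
        k = 0
        while a + k < len(s) and b + k < len(s) and s[a + k] == s[b + k]:
            k += 1
        lcps.append(k)
    return lcps

s = "banana"
sa = suffix_array_naive(s)   # [5, 3, 1, 0, 4, 2]
print(adjacent_lcps(s, sa))  # [1, 3, 0, 0, 2]
```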

2.
A Test Resource Partitioning Method Using Variable-Tail Coding Compression [Cited: 13; self: 6; others: 13]
Test resource partitioning is an effective way to reduce test cost. This paper proposes a new and effective code for compressing test data, Variable-Tail coding, and builds a test resource partitioning scheme on top of it. Theoretical analysis and experiments show that Variable-Tail coding achieves a higher compression ratio than Golomb coding, compresses test vectors well under a variety of patterns, and admits a decoder that is easy to implement in hardware. The paper also proposes a test-vector ordering algorithm that integrates dynamic assignment of don't-care bits, which further improves the compression ratio. Experimental data confirm the efficiency of both the proposed code and the ordering algorithm.
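The abstract does not specify the Variable-Tail codeword construction, so the sketch below shows only the Golomb run-length baseline it is compared against: each run of 0s terminated by a 1 becomes a unary quotient plus a fixed-length remainder (group size m assumed to be a power of two).

```python
def golomb_encode_runs(bits, m=4):
    """Golomb-encode run lengths of 0s (each run terminated by a 1).
    m is the group size, assumed here to be a power of two."""
    assert m & (m - 1) == 0, "sketch assumes m is a power of two"
    tail_bits = m.bit_length() - 1
    out, run = [], 0
    for b in bits:
        if b == 0:
            run += 1
        else:
            q, r = divmod(run, m)                  # unary quotient, binary tail
            out.append("1" * q + "0" + format(r, f"0{tail_bits}b"))
            run = 0
    return "".join(out)

# runs of length 5, 2 and 0 -> "1001" + "010" + "000"
print(golomb_encode_runs([0, 0, 0, 0, 0, 1, 0, 0, 1, 1], m=4))
```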

3.
Modern Electronics Technique (现代电子技术), 2016, (17): 112-115
This paper designs and implements a Chinese domain-term extraction system and proposes a prefix/suffix-based extraction algorithm for Chinese domain terms. The algorithm is independent of any specific domain and extracts terms containing common prefixes and suffixes particularly well. Building on a study of the key techniques of domain ontology construction, the paper proposes a semi-automatic ontology construction method applicable to different domains. Finally, the successful construction of a climate-change domain ontology for a digital library validates the effectiveness of this multi-source-data semi-automatic construction method, which can be easily ported to other domains.
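A minimal sketch of the affix idea with a hypothetical affix inventory; the paper's actual lists and scoring are not given in the abstract.

```python
# Hypothetical affix lists; the paper's actual inventory is not given here.
SUFFIXES = ("性", "度", "器", "法", "系统")   # e.g. -ness, -degree, -er, -method, -system
PREFIXES = ("超", "非", "多")                 # e.g. super-, non-, multi-

def candidate_terms(words):
    """Keep words that carry a known domain-term prefix or suffix."""
    return [w for w in words
            if w.endswith(SUFFIXES) or w.startswith(PREFIXES)]

print(candidate_terms(["鲁棒性", "温度", "滤波器", "今天", "多源数据"]))
# -> ['鲁棒性', '温度', '滤波器', '多源数据']
```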

4.
Optimization and Implementation of a Sorting Algorithm Based on Multi-core Multi-threading [Cited: 1; self: 1; others: 0]
Exploiting multi-threading techniques and the characteristics of multi-core processors, this paper proposes a multi-threaded improvement of merge sort. The new algorithm combines several sorting algorithms and is implemented with the WIN32 API multi-threaded programming model. Experimental results show a substantial efficiency gain over the traditional algorithm.
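The paper implements this with the WIN32 API in C; as a language-neutral illustration, here is a minimal Python sketch of the same divide-sort-merge structure, using one process per core (processes rather than threads, to sidestep Python's GIL).

```python
from concurrent.futures import ProcessPoolExecutor
from heapq import merge

def parallel_merge_sort(data, workers=4):
    """Split data into one chunk per worker, sort the chunks in
    parallel on separate cores, then k-way merge the results."""
    n = max(1, len(data) // workers)
    chunks = [data[i:i + n] for i in range(0, len(data), n)]
    with ProcessPoolExecutor(max_workers=workers) as pool:
        sorted_chunks = list(pool.map(sorted, chunks))
    return list(merge(*sorted_chunks))

if __name__ == "__main__":
    import random
    xs = [random.randint(0, 10**6) for _ in range(100_000)]
    assert parallel_merge_sort(xs) == sorted(xs)
```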

5.
Fu Weihong, Zhao Yichen. Journal of Signal Processing (信号处理), 2023, 39(2): 335-342
Convolutive blind source separation can be solved effectively in the frequency domain, but frequency-domain separation must resolve the permutation ambiguity. This paper proposes a permutation alignment algorithm for frequency-domain blind separation based on performance-weighted clustering. The algorithm uses clustering to obtain an ordering reference, evaluates the separation accuracy at each frequency bin, and assigns each bin a clustering weight according to that accuracy, improving the reliability of the clustering result. Processing the frequency bins in segments effectively suppresses the propagation of permutation errors and further improves performance. Multiple simulations verify the generality and superiority of the proposed algorithm and also examine how the number of receivers affects performance. The results show that, compared with the conventional amplitude-correlation permutation algorithm, the proposed algorithm gains about 2 dB in signal-to-interference ratio, and separation performance improves as the number of receiving antennas grows.
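The paper's exact performance-weight computation is not given in the abstract; the NumPy sketch below is only a hedged illustration of the mechanism it refines: greedy permutation alignment across frequency bins by envelope correlation, with a per-bin weight (hypothetical here) scaling each bin's contribution to the running centroid.

```python
import numpy as np
from itertools import permutations

def align_permutations(env, weights):
    """env: amplitude envelopes of separated signals, shape
    (bins, sources, frames). weights: one reliability weight per bin.
    Greedily fixes each bin's permutation by maximizing envelope
    correlation with a weighted running centroid."""
    bins, srcs, _ = env.shape
    centroid, total_w = env[0].astype(float).copy(), float(weights[0])
    order = [tuple(range(srcs))]
    for f in range(1, bins):
        # pick the permutation whose envelopes best match the centroid
        best = max(permutations(range(srcs)),
                   key=lambda p: sum(np.corrcoef(centroid[i], env[f, p[i]])[0, 1]
                                     for i in range(srcs)))
        order.append(best)
        env[f] = env[f, list(best)]
        # reliable bins (large weight) pull the centroid harder
        centroid = (total_w * centroid + weights[f] * env[f]) / (total_w + weights[f])
        total_w += weights[f]
    return order
```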

6.
Based on FPGA hardware and the idea of trading space for time, this paper proposes a parallel full-comparison sorting algorithm. By comparing all data elements in parallel, the algorithm computes each element's position in the sorted order directly. It can sort a numeric sequence within 4 clock cycles; experiments show good real-time performance and broad applicability.
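What the FPGA evaluates with parallel comparator arrays can be written sequentially as a rank sort: each element's final position is the number of elements that must precede it. A minimal Python sketch (the loops here are sequential stand-ins for the hardware's parallel comparisons):

```python
def rank_sort(xs):
    """Each element's sorted position = how many elements precede it.
    Ties are broken by original index so equal keys get distinct
    positions, mirroring a stable hardware rank computation."""
    out = [None] * len(xs)
    for i, x in enumerate(xs):
        rank = sum(1 for j, y in enumerate(xs)
                   if y < x or (y == x and j < i))
        out[rank] = x
    return out

print(rank_sort([3, 1, 4, 1, 5]))  # [1, 1, 3, 4, 5]
```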

7.
Cross-language Query Expansion Based on Item-Weight-Ordered Mining [Cited: 1; self: 0; others: 1]
Huang Mingxuan, Jiang Caoqing. Acta Electronica Sinica (电子学报), 2020, 48(3): 568-576
To alleviate the long-standing problems of topic drift and word mismatch in natural language processing applications, this paper first proposes a method for computing the support of weighted itemsets and a pruning method based on item-weight ordering, and gives a weighted association rule mining algorithm for query expansion built on that ordering. It discusses three expansion models drawn from the rules (mixed expansion, consequent expansion, and antecedent expansion) and finally proposes a cross-language query expansion algorithm based on item-weight-ordered mining. The algorithm mines weighted association rules with the new support and pruning strategy and, following the expansion model, extracts high-quality expansion terms from the rules to perform cross-language query expansion. Experiments show that, compared with existing cross-language expansion algorithms based on weighted association rule mining, the proposed algorithm effectively suppresses topic drift and word mismatch and can improve retrieval performance for information retrieval in various languages. Among the expansion models, consequent expansion retrieves best; mixed expansion performs worse than either consequent or antecedent expansion; support benefits consequent expansion most, while confidence is more helpful to antecedent and mixed expansion. The mining method can also be applied to text mining, business data mining, and recommender systems to improve their mining performance.
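The paper's exact weighted-support formula is not given in the abstract; the sketch below uses one common convention (mean item weight within each matching transaction, normalized by transaction count) purely to illustrate how item weights enter support counting.

```python
def weighted_support(itemset, transactions):
    """transactions: list of {item: weight} dicts. Uses an assumed
    convention: average the itemset's weights in each transaction
    that contains it, then normalize by the transaction count."""
    total = 0.0
    for t in transactions:
        if all(i in t for i in itemset):
            total += sum(t[i] for i in itemset) / len(itemset)
    return total / len(transactions)

docs = [{"solar": 0.9, "energy": 0.7}, {"solar": 0.4}, {"energy": 0.8}]
print(weighted_support({"solar", "energy"}, docs))  # 0.8/3 ≈ 0.267
```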

8.
The key technical goal of RPR (Resilient Packet Ring) is to achieve high bandwidth utilization, spatial reuse, and fairness of bandwidth allocation at the same time. The RPR draft produced by the IEEE 802.17 working group already specifies a fairness algorithm, but it still has open problems, such as slow convergence and oscillation under unbalanced traffic. To address these problems, this paper proposes a new GPS-based (Generalized Processor Sharing) bandwidth fairness algorithm that combines queue ordering with the feedback mechanism of the RPR draft to improve RPR performance. Theoretical analysis and simulation show that the algorithm not only eliminates the unbalanced-traffic problem and achieves fair bandwidth allocation, but also maximizes bandwidth utilization and realizes spatial reuse.

9.
Suffix trees for code-clone detection are usually built at word or character granularity, which costs much space and increases the number of string comparisons. To address this, the paper designs a suffix-tree-based code similarity detection method: the Rabin fingerprint algorithm generates a fingerprint sequence at sentence granularity, a suffix tree is built over the fingerprints, RMQ (range minimum queries) extract the lengths of common fingerprint substrings from the tree, and code similarity is computed from those lengths.
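A hedged sketch of the first two steps: fingerprint the code sentence by sentence (Python's built-in hash stands in for a true Rabin rolling fingerprint) and measure the longest common fingerprint substring, here by dynamic programming where the paper uses a suffix tree plus RMQ.

```python
def fingerprints(code):
    """One fingerprint per non-empty line; hash() is a stand-in for
    a true Rabin fingerprint."""
    return [hash(line.strip()) for line in code.splitlines() if line.strip()]

def longest_common_run(a, b):
    """Longest common substring of two fingerprint sequences
    (rolling-row dynamic programming)."""
    best, dp = 0, [0] * (len(b) + 1)
    for x in a:
        prev = 0
        for j, y in enumerate(b, 1):
            prev, dp[j] = dp[j], prev + 1 if x == y else 0
        best = max(best, max(dp))
    return best

f1 = fingerprints("a = 1\nb = 2\nprint(a+b)")
f2 = fingerprints("b = 2\nprint(a+b)")
print(longest_common_run(f1, f2))  # 2 matching consecutive lines
```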

10.
To address the difficulty of faithfully characterizing weighted networks in current telecommunication systems and of refining the associated complex-network models, and in particular the unclear hierarchy and high computational cost of existing communication-community detection, this paper designs a new community detection algorithm starting from complex-network feature analysis. The algorithm detects communication communities by ordering nodes by communication strength, generates a high-resolution hierarchical nesting tree from the communication density distribution, and prunes the tree with distance vectors, achieving stable community detection and hierarchical structure analysis while reducing computational complexity. The algorithm is validated on real network data.

11.
In this correspondence, we present a new universal entropy estimator for stationary ergodic sources, prove almost sure convergence, and establish an upper bound on the convergence rate for finite-alphabet finite memory sources. The algorithm is motivated by data compression using the Burrows-Wheeler block sorting transform (BWT). By exploiting the property that the BWT output sequence is close to a piecewise stationary memoryless source, we can segment the output sequence and estimate probabilities in each segment. Experimental results show that our algorithm outperforms Lempel-Ziv (LZ) string-matching-based algorithms.
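As a hedged illustration of the mechanism described (none of the paper's segmentation or convergence machinery is reproduced), the sketch below computes the BWT by naive rotation sorting, splits the output into fixed-size segments, and averages the segments' empirical entropies.

```python
from collections import Counter
from math import log2

def bwt(s):
    """Naive O(n^2 log n) BWT via sorted rotations; fine for a sketch."""
    s = s + "\0"                          # unique end-of-string sentinel
    table = sorted(s[i:] + s[:i] for i in range(len(s)))
    return "".join(row[-1] for row in table)

def entropy_estimate(s, seg_len=64):
    """Average per-segment empirical entropy of the BWT output.
    Fixed-size segments are an assumption; the paper's segmentation
    rule is more refined."""
    t = bwt(s)
    segs = [t[i:i + seg_len] for i in range(0, len(t), seg_len)]
    def h(seg):
        n = len(seg)
        return -sum(c / n * log2(c / n) for c in Counter(seg).values())
    return sum(len(seg) * h(seg) for seg in segs) / len(t)

print(entropy_estimate("abab" * 200))     # well below 1 bit/symbol
```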

12.
Compression Experiments on Weather Radar Echo Data [Cited: 1; self: 1; others: 0]
The echo data volume in networked weather radar systems is large and hard to transmit over domestic narrowband communication networks. This paper proposes a compression scheme for weather radar echo data: the data are first wavelet-transformed, the transform coefficients are uniformly quantized, and lossless coding is then applied, combining the compression speed of the BWT (Burrows-Wheeler Transform) with the compression efficiency of PPMD (Prediction by Partial Matching), achieving a compression ratio above 10:1. The effects of one- and two-dimensional wavelet transforms on performance are also compared. Results show that the algorithm meets the signal-to-noise requirement while reaching the target compression ratio.
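A minimal sketch of the transform-quantize-losslessly-code pipeline, with stand-ins for the paper's components: PyWavelets for the 2-D wavelet transform, uniform quantization, and bz2 (a BWT-based compressor) in place of the BWT+PPMD back end. All parameter values are illustrative assumptions.

```python
import bz2
import numpy as np
import pywt

def compress_echo(field, step=0.5, wavelet="db4", level=3):
    """field: 2-D numpy array of radar reflectivity (stand-in data)."""
    coeffs = pywt.wavedec2(field, wavelet, level=level)
    flat, _ = pywt.coeffs_to_array(coeffs)        # pack into one 2-D array
    q = np.round(flat / step).astype(np.int16)    # uniform quantization
    return bz2.compress(q.tobytes())              # BWT-based lossless stage

field = np.random.randn(256, 256).cumsum(axis=1)  # smooth-ish stand-in field
blob = compress_echo(field)
print(len(blob), "bytes for", field.nbytes, "raw")
```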

13.
14.
Given an AWGN channel, we look at the problem of designing an optimal binary uncoded communication system for transmitting blocks of binary symbols generated by a stationary source with memory modelled by a Markov chain (MC) or a hidden Markov model (HMM). The goal is to minimize the average SNR required for a given block error rate. The particular case where the binary source is memoryless with nonuniform symbol probabilities has been studied by Korn et al. [Optimal binary communication with nonequal probabilities. IEEE Trans Commun 2003;51:1435–8] [1] by optimally allocating the energies of the transmitted signals. In this paper we generalize the previous work to include the important case of sources with memory. The proposed system integrates the block sorting Burrows Wheeler Transform (BWT, [Burrows M, Wheeler D. A block sorting lossless data compression algorithm. Research report 124. Digital Systems Center, 1994]) [2] with an optimal energy allocation scheme based on the first order probabilities of the transformed symbols. Analytical expressions are derived for the energy gain obtained with the proposed system when compared either with the optimal blockwise MAP receiver or with a standard source coded system consisting of an optimal source encoder followed by an optimal uncoded binary communication system, i.e. by a symbol-by-symbol MAP detector.

15.
A Memory-Optimized Wavelet Zeroblock Embedded Image Coding Algorithm [Cited: 1; self: 0; others: 1]
Wang Na, Li Xia. Acta Electronica Sinica (电子学报), 2006, 34(11): 2068-2071
SPECK (Set Partitioned Embedded bloCK) is an efficient embedded image coding algorithm with progressive transmission, but its large memory footprint during encoding and decoding slows it down and hinders hardware implementation. This paper proposes a memory-optimized wavelet zeroblock embedded image coding algorithm that performs the sorting and refinement passes of embedded coding with two flag-state maps and a block depth-first search strategy. The two flag maps mark, respectively, the significant coefficients and the insignificant sets during coding, while the depth-first search locates insignificant sets within the block structure; together they replace SPECK's lists of significant coefficients and insignificant sets and save a large amount of memory. Experiments show that, at comparable PSNR, memory usage drops to 1/12 of SPECK's; compared with LZC (Listless Zerotree Coding), another low-memory zerotree coder, PSNR improves by at least 1.1 dB at a slight memory increase. This offers an effective route to hardware implementation of wavelet zeroblock coding.

16.
This paper introduces a new data compression algorithm. The goal underlying this new code design is to achieve a single lossless compression algorithm with the excellent compression ratios of the prediction by partial mapping (PPM) algorithms and the low complexity of codes based on the Burrows Wheeler Transform (BWT). Like the BWT-based codes, the proposed algorithm requires worst case O(n) computational complexity and memory; in contrast, the unbounded-context PPM algorithm, called PPM*, requires worst case O(n²) computational complexity. Like PPM*, the proposed algorithm allows the use of unbounded contexts. Using standard data sets for comparison, the proposed algorithm achieves compression performance better than that of the BWT-based codes and comparable to that of PPM*. In particular, the proposed algorithm yields an average rate of 2.29 bits per character (bpc) on the Calgary corpus; this result compares favorably with the 2.33 and 2.34 bpc of PPM5 and PPM* (PPM algorithms), the 2.43 bpc of BW94 (the original BWT-based code), and the 3.64 and 2.69 bpc of compress and gzip (popular Unix compression algorithms based on Lempel-Ziv (LZ) coding techniques) on the same data set. The given code does not, however, match the best reported compression performance (2.12 bpc with PPMZ9) listed on the Calgary corpus results web page at the time of this publication. Results on the Canterbury corpus give a similar relative standing. The proposed algorithm gives an average rate of 2.15 bpc on the Canterbury corpus, while the Canterbury corpus web page gives average rates of 1.99 bpc for PPMZ9, 2.11 bpc for PPM5, 2.15 bpc for PPM7, 2.23 bpc for BZIP2 (a popular BWT-based code), and 3.31 and 2.53 bpc for compress and gzip, respectively.
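As background for the comparison, here is a minimal sketch of the BWT front end that BZIP2-style codes share: block sorting followed by move-to-front recoding, after which the output is dominated by small integers that are cheap to entropy-code. (The proposed algorithm's PPM-style unbounded-context modelling is not reproduced here.)

```python
def bwt(s):
    """Naive BWT via sorted rotations, with a sentinel for invertibility."""
    s = s + "\0"
    rotations = sorted(s[i:] + s[:i] for i in range(len(s)))
    return "".join(r[-1] for r in rotations)

def move_to_front(s):
    """Recode symbols as their position in a self-organizing list;
    BWT output turns into long runs of 0s and small indices."""
    alphabet = sorted(set(s))
    out = []
    for ch in s:
        i = alphabet.index(ch)
        out.append(i)
        alphabet.insert(0, alphabet.pop(i))
    return out

print(move_to_front(bwt("banana banana banana")))
# mostly 0s and small indices -> cheap for the entropy-coding stage
```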

17.
The method of modeling and ordering in wavelet domain is very important to design a successful algorithm of embedded image compression. In this paper, the modeling is limited to "pixel classification," the relationship between wavelet pixels in significance coding. Similarly, the ordering is limited to "pixel sorting," the coding order of wavelet pixels. We use pixel classification and sorting to provide a better understanding of previous works. The image pixels in wavelet domain are classified and sorted, either explicitly or implicitly, for embedded image compression. A new embedded image code is proposed based on a novel pixel classification and sorting (PCAS) scheme in wavelet domain. In PCAS, pixels to be coded are classified into several quantized contexts based on a large context template and sorted based on their estimated significance probabilities. The purpose of pixel classification is to exploit the intraband correlation in wavelet domain. Pixel sorting employs several fractional bit-plane coding passes to improve the rate-distortion performance. The proposed pixel classification and sorting technique is simple, yet effective, producing an embedded image code with excellent compression performance. In addition, our algorithm is able to provide either spatial or quality scalability with flexible complexity.

18.
A low memory zerotree coding for arbitrarily shaped objects [Cited: 2; self: 0; others: 2]
The set partitioning in hierarchical trees (SPIHT) algorithm is a computationally simple and efficient zerotree coding technique for image compression. However, the high working memory requirement is its main drawback for hardware realization. We present a low memory zerotree coder (LMZC), which requires much less working memory than SPIHT. The LMZC coding algorithm abandons the use of lists, defines a different tree structure, and merges the sorting pass and the refinement pass together. The main techniques of LMZC are recursive programming and a top-bit scheme (TBS). In TBS, the top bits of transformed coefficients are used to store the coding status of coefficients instead of the lists used in SPIHT. In order to achieve high coding efficiency, shape-adaptive discrete wavelet transforms are used to transform arbitrarily shaped objects. A compact placement of the transformed coefficients is also proposed to further reduce working memory. The LMZC carefully treats "don't care" nodes in the wavelet tree and does not use bits to code such nodes. Comparison of LMZC with SPIHT shows that for coding a 768 × 512 color image, LMZC saves at least 5.3 MBytes of memory but only slightly increases execution time and reduces peak signal-to-noise ratio (PSNR) values marginally, thereby making it highly promising for some memory-limited applications.

19.
Gao Xiqi, Gan Lu, Zou Cairong. Acta Electronica Sinica (电子学报), 2001, 29(6): 796-799
This paper presents a parameterized design method for a class of symmetric-antisymmetric multiwavelet filter banks and, on that basis, proposes a new multiwavelet zerotree coding scheme. Experiments show that, compared with the widely used 9-7 single wavelet, the designed biorthogonal symmetric-antisymmetric multiwavelets improve coding performance considerably.

20.
This paper presents an effective and efficient preprocessing algorithm for two-dimensional (2-D) electrocardiogram (ECG) compression to better compress irregular ECG signals by exploiting their inter- and intra-beat correlations. To better reveal the correlation structure, we first convert the ECG signal into a proper 2-D representation, or image. This involves a few steps including QRS detection and alignment, period sorting, and length equalization. The resulting 2-D ECG representation is then ready to be compressed by an appropriate image compression algorithm. We choose the state-of-the-art JPEG2000 for its high efficiency and flexibility. In this way, the proposed algorithm is shown to outperform some existing arts in the literature by simultaneously achieving high compression ratio (CR), low percent root mean squared difference (PRD), low maximum error (MaxErr), and low standard derivation of errors (StdErr). In particular, because the proposed period sorting method rearranges the detected heartbeats into a smoother image that is easier to compress, this algorithm is insensitive to irregular ECG periods. Thus either the irregular ECG signals or the QRS false-detection cases can be better compressed. This is a significant improvement over existing 2-D ECG compression methods. Moreover, this algorithm is not tied exclusively to JPEG2000. It can also be combined with other 2-D preprocessing methods or appropriate codecs to enhance the compression performance in irregular ECG cases.
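A hedged sketch of the preprocessing chain described above (QRS detection, beat segmentation, period sorting, length equalization, 2-D stacking); the peak detector and all parameter values are illustrative assumptions, not the paper's settings.

```python
import numpy as np
from scipy.signal import find_peaks, resample

def ecg_to_image(ecg, fs=360, width=200):
    """ecg: 1-D numpy array. Returns a 2-D beat-per-row array ready
    for an image codec such as JPEG2000."""
    peaks, _ = find_peaks(ecg, height=np.percentile(ecg, 95),
                          distance=int(0.4 * fs))      # crude QRS detection
    beats = [ecg[a:b] for a, b in zip(peaks[:-1], peaks[1:])]
    beats.sort(key=len)                                # period sorting
    rows = [resample(b, width) for b in beats]         # length equalization
    return np.vstack(rows)                             # 2-D "ECG image"
```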
