1.
A generic macroblock-parallel decoding order for H.264 intra-frame decoding is designed that avoids data conflicts during decoding. On this basis, a macroblock-parallel intra-prediction decoding unit with reusable memory and computation units is designed, improving decoding speed while keeping resource overhead low. Speed tests of the parallel decoder and DC (Design Compiler) synthesis results verify the effectiveness of the proposed VLSI architecture of the reusable macroblock-parallel intra-frame decoder; the average decoding speed reaches 113 cycles per macroblock.
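A conflict-free parallel order of the kind described above can be illustrated with a standard wavefront schedule. This is a sketch under the assumption that each macroblock's intra prediction needs its left, top, and top-right neighbours (typical for H.264); the paper's exact order may differ.

```python
def wavefront_schedule(mb_cols, mb_rows):
    """Group macroblocks into waves that can be intra-decoded in parallel.

    Assuming MB (x, y) depends on its left, top, and top-right
    neighbours, it is ready at wave x + 2*y, so all MBs sharing that
    wave index have no data conflicts with each other."""
    waves = {}
    for y in range(mb_rows):
        for x in range(mb_cols):
            waves.setdefault(x + 2 * y, []).append((x, y))
    return [waves[t] for t in sorted(waves)]
```

Every macroblock in a wave depends only on macroblocks from strictly earlier waves, which is exactly the property that lets multiple decoding units run concurrently.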
2.
To address the pilot contamination and degraded decoding performance caused by pilot reuse in interfering cells of multi-cell massive antenna array systems, a blind decoding algorithm based on ICA (independent component analysis) is proposed. The algorithm separates and decodes the received multi-cell user signals with ICA, requiring no transmitted pilot sequences, which avoids pilot contamination and improves decoding performance. During decoding it also estimates each user's direction of arrival (DOA) and uses this information to overcome the permutation ambiguity inherent in ICA separation and identify the desired user's signal. Theoretical analysis and simulation results show that the proposed blind decoding method outperforms both the widely used MMSE decoder and a recently proposed eigenvalue-based blind decoding method.
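The DOA-based resolution of ICA's permutation ambiguity can be sketched as follows. The interface is hypothetical (the ICA separation and DOA estimation themselves are not shown); only the selection step the abstract describes is illustrated.

```python
def pick_desired_component(components, est_doas, desired_doa):
    """Resolve the ICA permutation ambiguity as described above: each
    separated component carries an estimated direction of arrival, and
    the component whose DOA is closest to the desired user's known DOA
    is selected as that user's signal."""
    idx = min(range(len(est_doas)),
              key=lambda i: abs(est_doas[i] - desired_doa))
    return components[idx]
```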
3.
4.
5.
An Efficient H.264 Decoding Architecture for SOC Platforms and Its Implementation
An efficient H.264 decoding architecture for SOC platforms is proposed. Through macroblock-level pipelining of decoding and deblocking, it resolves the mismatch between the macroblock order in the bitstream, when flexible macroblock ordering (FMO) and arbitrary slice order (ASO) are used, and the raster-scan order required by the deblocking filter, making the design more amenable to hardware implementation. In the hardware design, a cache buffers the temporary data used for deblocking, and dual parallel buses satisfy the high system-bandwidth requirement. FPGA and SOC implementation results show that at a 166 MHz system clock the design fully meets the decoding requirement of 1080HD video (1920×1088 @ 30 fps).
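The core of the FMO/ASO reordering problem above is placing macroblocks decoded in bitstream order into a raster-scan buffer for the deblocking stage. A minimal sketch (the address interface is hypothetical):

```python
def raster_reorder(decoded_mbs, mb_addresses, total_mbs):
    """Place macroblocks decoded in FMO/ASO stream order into a
    raster-scan frame buffer so deblocking can consume them in raster
    order. `mb_addresses[i]` is the raster address of the i-th MB in
    the bitstream, as given by the slice-group map."""
    frame = [None] * total_mbs
    for mb, addr in zip(decoded_mbs, mb_addresses):
        frame[addr] = mb
    return frame
```

In hardware, the same mapping is what the cache buffer mediates between the two pipeline stages.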
6.
7.
8.
When parsing and analyzing signaling on the Abis interface, the decoded results of most signaling messages do not themselves carry location area or cell information, yet these are essential dimensions for subsequent analysis. The decoded results therefore need to be supplemented with location area and cell information, which is one of the core problems an Abis signaling monitoring system must solve. By correlating the special link identifiers of the Abis interface with system messages, the location area and cell of a signaling message's current cell can be obtained. By further correlating inter-cell handovers during call setup and applying probability-based self-learning over a period of time, the system can learn the location areas and cell identities of neighboring cells as well as their BCCH frequencies.
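The two correlation mechanisms above (link-to-cell mapping from system messages, and vote-based neighbour self-learning from handovers) can be sketched as follows. All field names (link id, LAC, CI, BCCH) are illustrative; the real message structure depends on the monitoring system.

```python
from collections import Counter, defaultdict

class CellInfoLearner:
    """Attach LAC/CI to Abis messages and self-learn neighbour cells."""

    def __init__(self):
        self.link_to_cell = {}                       # link id -> (lac, ci)
        self.neighbour_votes = defaultdict(Counter)  # cell -> neighbour counts

    def on_system_message(self, link_id, lac, ci):
        # System messages carry the serving-cell identity for this link.
        self.link_to_cell[link_id] = (lac, ci)

    def annotate(self, link_id):
        # Supply the cell identity for messages that do not carry it.
        return self.link_to_cell.get(link_id)

    def on_handover(self, src_link_id, target_lac, target_ci, target_bcch):
        # Each observed handover is one vote for a neighbour relation.
        cell = self.link_to_cell.get(src_link_id)
        if cell is not None:
            self.neighbour_votes[cell][(target_lac, target_ci, target_bcch)] += 1

    def neighbours(self, cell, min_votes=3):
        # Keep only neighbours seen often enough, mimicking the
        # probabilistic accumulation described in the abstract.
        return [n for n, v in self.neighbour_votes[cell].items() if v >= min_votes]
```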
9.
10.
Based on the characteristics of the interpolation algorithm in the AVS standard, a motion-compensation hardware module design for an AVS decoder chip is proposed. The design suitably optimizes and reuses the various interpolation modes defined in the AVS standard, effectively saving hardware resources. A method of fetching reference data according to the macroblock partition type is also proposed, which reduces read bandwidth and improves memory access efficiency. Synthesis and simulation results show that the design occupies few resources and meets real-time decoding requirements.
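As a small illustration of the kind of interpolation such a module implements, here is a half-sample filter using the 4-tap kernel (-1, 5, 5, -1)/8, which is commonly cited for AVS luma half-pel positions. Treat the exact taps as an assumption of this sketch rather than a statement of the standard.

```python
def half_pel(row, i):
    """Half-sample interpolation between row[i] and row[i+1] with the
    4-tap filter (-1, 5, 5, -1)/8, rounded and clipped to 8-bit range."""
    a, b, c, d = row[i - 1], row[i], row[i + 1], row[i + 2]
    val = (-a + 5 * b + 5 * c - d + 4) >> 3
    return max(0, min(255, val))
```

In hardware, reusing one such filter datapath across the interpolation modes is what saves resources.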
11.
Combining the advantages of the genetic algorithm (GA) and the Chase decoding algorithm, a novel decoding algorithm for block turbo codes (BTC) with lower computational complexity and faster decoding is proposed to meet the developing demands of optical communication systems. Compared with the traditional Chase decoding algorithm, the new algorithm reduces computational complexity and accelerates decoding. Simulation results show that the net coding gain (NCG) of the novel BTC decoding algorithm is 1.1 dB higher than that of the traditional Chase algorithm at a bit error rate (BER) of 10^-6. The novel decoding algorithm therefore has better error-correction performance and is well suited to BTC in optical communication systems.
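The Chase component the abstract builds on generates candidate codewords by flipping the least-reliable bits. A Chase-2-style sketch of that step (the GA refinement of the search is not shown):

```python
from itertools import product

def chase2_patterns(llrs, p=3):
    """Chase-2 candidate generation: flip every subset of the p
    least-reliable hard-decision bits, yielding 2**p test patterns
    to hand to an algebraic decoder."""
    hard = [1 if l < 0 else 0 for l in llrs]
    # indices of the p positions with smallest |LLR| (least reliable)
    weak = sorted(range(len(llrs)), key=lambda i: abs(llrs[i]))[:p]
    patterns = []
    for flips in product([0, 1], repeat=p):
        cand = hard[:]
        for idx, f in zip(weak, flips):
            cand[idx] ^= f
        patterns.append(cand)
    return patterns
```

The GA's contribution, per the abstract, is to explore this candidate space with fewer evaluations than exhaustively testing all 2^p patterns.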
12.
To remedy the performance gap of the Min-Sum decoding algorithm relative to the LLR-BP algorithm while reducing the implementation complexity of LLR-BP, an improved Min-Sum decoding algorithm is proposed that effectively combines the Normalized BP-Based and Offset BP-Based algorithms: when computing check-node messages it introduces both a normalization factor and an offset factor, with the parameters chosen by the minimum mean-square-error criterion. Simulation results show that at the same bit error rate, the improved Min-Sum algorithm outperforms the Min-Sum, Normalized BP-Based, and Offset BP-Based algorithms, with decoding performance approaching that of LLR-BP.
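The combined check-node rule described above can be written compactly: sign product times max(alpha·min − beta, 0). A minimal sketch, with illustrative parameter values rather than the paper's MMSE-optimized ones:

```python
def check_node_update(in_llrs, alpha=0.8, beta=0.15):
    """Check-to-variable messages with both a normalization factor
    `alpha` and an offset `beta`, as in the improved Min-Sum rule.
    For each edge, the message excludes that edge's own input."""
    out = []
    for i in range(len(in_llrs)):
        others = in_llrs[:i] + in_llrs[i + 1:]
        sign = 1
        for l in others:
            if l < 0:
                sign = -sign
        mag = min(abs(l) for l in others)
        out.append(sign * max(alpha * mag - beta, 0.0))
    return out
```

Setting alpha=1, beta=0 recovers plain Min-Sum; alpha<1 alone is Normalized BP-Based, beta>0 alone is Offset BP-Based.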
13.
To improve error-correcting performance, an iterative concatenated soft-decision decoding algorithm for Reed-Solomon (RS) codes is presented in this article, offering performance advantages over presently popular soft decoding algorithms at an acceptable cost in complexity. The proposed algorithm consists of two powerful soft decoding techniques, adaptive belief propagation (ABP) and the box and match algorithm (BMA), which are serially concatenated via the accumulated log-likelihood ratio (ALLR). Simulation results show that, compared with the ABP and ABP-BMA algorithms, the proposed algorithm brings more decoding gain and a better tradeoff between decoding performance and complexity.
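The ALLR used to pass soft information between the two stages can be sketched as a simple accumulation of extrinsic values onto the channel LLRs. This shows only the combining rule; any damping or scaling used in the paper is omitted, and the interface is assumed.

```python
def accumulated_llr(channel_llr, extrinsic_history):
    """Accumulated log-likelihood ratio (ALLR): the channel LLRs plus
    the extrinsic LLRs gathered over earlier ABP/BMA iterations."""
    acc = list(channel_llr)
    for ext in extrinsic_history:
        acc = [a + e for a, e in zip(acc, ext)]
    return acc
```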
14.
A recursive procedure is derived for decoding of rate R = 1/n binary convolutional codes which minimizes the probability of error of the individual decoding decisions for each information bit, subject to the constraint that the decoding delay be limited to Δ branches. This new decoding algorithm is similar to, but somewhat more complex than, the Viterbi decoding algorithm. A "real-time," i.e., fixed decoding delay, version of the Viterbi algorithm is also developed and used for comparison to the new algorithm on simulated channels. It is shown that the new algorithm offers advantages over Viterbi decoding in soft-decision applications, such as in the inner coding system for concatenated coding.
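The fixed-delay ("real-time") Viterbi variant used for comparison can be sketched for the classic rate-1/2 (7, 5) code: each information bit is decided a fixed number of branches after it enters the decoder, by tracing back along the current best survivor. This is an illustration only; the paper's new per-bit MAP algorithm is more complex and is not reproduced here.

```python
G = [0b111, 0b101]  # generator polynomials of the (7, 5) code, K = 3

def encode(bits):
    """Rate-1/2 convolutional encoder; two output bits per input bit."""
    state, out = 0, []
    for b in bits:
        reg = (b << 2) | state
        out += [bin(reg & g).count("1") & 1 for g in G]
        state = reg >> 1
    return out

def viterbi_fixed_delay(received, delay):
    """Hard-decision Viterbi with truncated traceback: the bit `delay`
    branches back is emitted from the best survivor at every step."""
    survivors = {0: (0, [])}               # state -> (metric, input path)
    decided = []
    for t in range(0, len(received), 2):
        r = received[t:t + 2]
        nxt = {}
        for s, (metric, path) in survivors.items():
            for b in (0, 1):
                reg = (b << 2) | s
                ns = reg >> 1
                exp = [bin(reg & g).count("1") & 1 for g in G]
                m = metric + sum(x != y for x, y in zip(exp, r))
                if ns not in nxt or m < nxt[ns][0]:
                    nxt[ns] = (m, path + [b])
        survivors = nxt
        best = min(survivors.values())[1]
        if len(best) > delay:              # fixed-delay decision
            decided.append(best[len(decided)])
    best = min(survivors.values())[1]      # flush the remaining bits
    return decided + best[len(decided):]
```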
15.
Kamiya N. IEEE Transactions on Information Theory, 1997, 43(5): 1477-1488
We describe an efficient algorithm for successive errors-and-erasures decoding of BCH codes. The decoding algorithm consists of finding all necessary error locator polynomials and errata evaluator polynomials, choosing the most appropriate error locator polynomial and errata evaluator polynomial, using these two polynomials to compute a candidate codeword for the decoder output, and testing the candidate for optimality via an originally developed acceptance criterion. Even in the most stringent case possible, the acceptance criterion is only a little more stringent than Forney's (1966) criterion for generalised minimum distance (GMD) decoding. We present simulation results on the error performance of our decoding algorithm for binary antipodal signals over an AWGN channel and a Rayleigh fading channel. The number of calculations of elements in a finite field that are required by our algorithm is only slightly greater than that required by hard-decision decoding, while the error performance is almost as good as that achieved with GMD decoding. The presented algorithm is also applicable to efficient decoding of product RS codes.
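The GMD-style successive-trial skeleton underlying such decoders can be sketched as follows: erase the 2j least-reliable positions for growing j, run an errors-and-erasures decoder, and return the first candidate that passes an acceptance test. Both callbacks are caller-supplied here; the paper's algorithm finds all locator polynomials efficiently in one pass and uses its own, slightly weaker acceptance criterion.

```python
def gmd_decode(reliabilities, hard, decode_with_erasures, accept):
    """Generalised-minimum-distance trial loop.

    `decode_with_erasures(hard, erased)` returns a candidate codeword
    or None; `accept(cand, reliabilities)` is the acceptance test."""
    order = sorted(range(len(hard)), key=lambda i: reliabilities[i])
    for j in range(len(hard) // 2 + 1):
        erased = set(order[:2 * j])        # erase 2j least-reliable symbols
        cand = decode_with_erasures(hard, erased)
        if cand is not None and accept(cand, reliabilities):
            return cand
    return None
```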
16.
17.
To improve the performance of symbol-flipping decoding for non-binary LDPC (low-density parity-check) codes while reducing decoding complexity, a weighted symbol-flipping decoding algorithm based on average probability and a stopping criterion (APSCWSF, average probability and stopping criterion weighted symbol flipping) is proposed. The algorithm uses the average probability information of the symbol nodes adjacent to each check node as a weight, making the flipping function more effective and improving symbol-flipping efficiency, and hence decoding performance. A stopping criterion for the iterations further accelerates convergence. Simulation results show that on an additive white Gaussian noise channel, at a symbol error rate of 10^-5, the APSCWSF algorithm (Osc=10) gains about 0.68 dB, 0.83 dB, and 0.96 dB over the WSF algorithm, the NSCWSF algorithm (Osc=10), and the NSCWSF algorithm (Osc=6), respectively. The average number of iterations of the APSCWSF algorithm (Osc=6) is also reduced by 78.60%-79.32%, 74.89%-75.95%, and 67.20%-70.80%, respectively.
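The flipping loop with an average-probability weight and a syndrome-based stopping criterion can be sketched in the binary case (the paper treats non-binary codes, and its exact weight and oscillation handling differ):

```python
def wsf_decode(H, hard, prob, max_iter=20):
    """Weighted symbol-flipping sketch.

    H is a list of checks, each a list of symbol indices (each check
    assumed to involve at least two symbols); `prob` holds per-symbol
    reliabilities. Each failed check votes for its symbols with the
    average reliability of the *other* symbols on the check, loosely
    mirroring the average-probability weighting; decoding stops early
    once the syndrome is all-zero (the stopping criterion)."""
    x = hard[:]
    for _ in range(max_iter):
        syndrome = [sum(x[j] for j in row) % 2 for row in H]
        if not any(syndrome):
            return x                       # stopping criterion met
        scores = [0.0] * len(x)
        for row, s in zip(H, syndrome):
            if s:
                for j in row:
                    others = [prob[k] for k in row if k != j]
                    scores[j] += sum(others) / len(others)
        x[max(range(len(x)), key=lambda j: scores[j])] ^= 1  # flip worst symbol
    return x
```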
18.
Analysis of LDPC Encoding and Decoding in the IEEE 802.16e Standard
To further reduce decoding complexity while maintaining decoding performance, the standard also introduces into the decoding process the BP-Based algorithm proposed by M. Fossorier et al., and the practical decoding performance of these two classes of algorithms is analyzed. Simulation results show that, compared with the LLR-BP algorithm, the BP-Based algorithm achieves a better balance between decoding complexity and decoding performance across different code lengths and code rates, and is therefore better suited as an optimized algorithm for hardware decoders in practical communication systems.
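The complexity/performance tradeoff between the two rules is visible in their check-node updates: LLR-BP uses the exact tanh rule, while BP-Based (min-sum) replaces it with a sign product and a minimum. A minimal sketch of both, for the message to one variable given the LLRs from all other variables on the check:

```python
import math

def check_llr_exact(in_llrs):
    """Exact LLR-BP check-node rule (tanh rule)."""
    t = 1.0
    for l in in_llrs:
        t *= math.tanh(l / 2.0)
    return 2.0 * math.atanh(t)

def check_llr_bp_based(in_llrs):
    """BP-Based (min-sum) approximation: sign product times minimum
    magnitude -- no transcendental functions, at a small cost in
    accuracy (it always overestimates the magnitude)."""
    sign = 1.0
    for l in in_llrs:
        if l < 0:
            sign = -sign
    return sign * min(abs(l) for l in in_llrs)
```

The exact rule needs tanh/atanh evaluations per edge; the approximation needs only comparisons, which is why it suits hardware decoders.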
19.
Traditional rectangular-integration bispectrum feature extraction has two shortcomings: first, previous studies have not examined how the number of integration paths affects the recognition rate; second, in the rectangular-integration bispectrum algorithm, some integration paths contribute too little to recognition or even harm it. To overcome these problems, this paper proposes a method for individual identification of communication signals that combines an improved bispectrum with time-domain analysis. First, performance curves of recognition rate versus the number of integration paths are obtained experimentally and the optimal number of paths is selected; next, a maximum energy interval ratio algorithm removes the integration paths that contribute too little or have a negative effect; finally, the signals' time-domain features are incorporated and a support vector machine classifier performs individual identification. The proposed algorithm is validated on real signals at relatively low signal-to-noise ratios; experimental results show that the method handles individual identification of same-type emitter signals well, with an average correct-recognition rate above 95%.
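The path-pruning step can be illustrated with a simple energy-based selection: keep the strongest integration paths until they account for a chosen fraction of the total energy, discarding the low-contribution rest. The exact "maximum energy interval ratio" criterion in the paper may differ; this is a sketch of the idea.

```python
def select_paths(path_energies, keep_ratio=0.8):
    """Keep the highest-energy integration paths whose cumulative
    energy first reaches `keep_ratio` of the total; return their
    indices in ascending order."""
    order = sorted(range(len(path_energies)),
                   key=lambda i: path_energies[i], reverse=True)
    total = sum(path_energies)
    kept, acc = [], 0.0
    for i in order:
        kept.append(i)
        acc += path_energies[i]
        if acc >= keep_ratio * total:
            break
    return sorted(kept)
```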
20.
An efficient algorithm is presented for maximum-likelihood soft-decision decoding of the Leech lattice. The superiority of this decoder with respect to both computational and memory complexities is demonstrated in comparison with previously published decoding methods. Gain factors in the range of 2-10 are achieved. The authors conclude with some more advanced ideas for achieving a further reduction of the algorithm complexity based on a generalization of the Wagner decoding method to two parity constraints. A comparison with the complexity of some trellis-coded modulation schemes is discussed. The decoding algorithm presented seems to achieve a computational complexity comparable to that of the equivalent trellis codes.
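The Wagner decoding method that the abstract generalizes is the classic soft-decision decoder for a single-parity-check code: take hard decisions, and if the parity check fails, flip the least reliable bit. The one-constraint version is shown for clarity (even parity assumed); extending it to two parity constraints is the paper's contribution.

```python
def wagner_decode(llrs):
    """Wagner decoding of a single (even) parity-check code: hard
    decisions, then flip the bit with the smallest |LLR| if the
    overall parity is violated."""
    hard = [1 if l < 0 else 0 for l in llrs]
    if sum(hard) % 2:                      # parity violated
        weakest = min(range(len(llrs)), key=lambda i: abs(llrs[i]))
        hard[weakest] ^= 1
    return hard
```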