1.
Motivated by the practical importance of hardware implementation of Turbo decoders, an improved Log-MAP decoding algorithm suited to parallel computation is proposed. During the computation of the decoder's intermediate parameters, the n-input max* approximation is simplified into a plain maximum-selection max operation plus the evaluation of a correction function, which reduces storage and yields an efficient low-complexity hardware structure for the Turbo decoder. The improved algorithm is applied to the CCSDS and WiMAX standards; simulation results show that, compared with the conventional Log-MAP algorithm, the proposed simplified approximation effectively reduces decoding complexity and latency while its error-correction performance remains close to that of Log-MAP, making it convenient for practical engineering use.
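The abstract does not spell out the exact correction function used; as a minimal sketch, the standard two-input Jacobian logarithm underlying Log-MAP, its max-only (Max-Log-MAP style) simplification, and the pairwise folding used for n inputs look like this:

```python
import math

def max_star(a, b):
    """Exact Jacobian logarithm used in Log-MAP:
    ln(e^a + e^b) = max(a, b) + ln(1 + e^{-|a-b|})."""
    return max(a, b) + math.log1p(math.exp(-abs(a - b)))

def max_star_approx(a, b):
    """Max-Log-MAP style simplification: drop the correction term
    and keep only the maximum-selection."""
    return max(a, b)

def max_star_n(xs):
    """n-input case folds pairwise: max*(x1,...,xn) = max*(...max*(x1,x2)...,xn)."""
    acc = xs[0]
    for x in xs[1:]:
        acc = max_star(acc, x)
    return acc
```

Because the pairwise fold computes ln(e^a + e^b) exactly, the n-input version equals ln(sum of exponentials), which is the quantity the simplified hardware approximates with a single max plus a correction.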
2.
Yi Wu Shi-Dong Zhou Yan Yao 《Electronics letters》2000,36(25):2079-2081
A new decision-aided soft a posteriori probability (APP) algorithm for iterative differential phase-shift keying (DPSK) signal demodulation/decoding in a Rayleigh flat-fading channel is presented. Compared with conventional APP algorithms for iterative DPSK, the new algorithm results in considerably lower decoding cost yet achieves nearly the same performance.
3.
4.
Vanstraceele C. Geller B. Brossier J.-M. Barbot J.-P. 《Communications, IEEE Transactions on》2008,56(12):1985-1987
In this letter, we present a low-complexity architecture designed for the decoding of block turbo codes. In particular, we simplify the implementation of Pyndiah's algorithm by not storing any of the concurrent codewords generated by the list decoder.
5.
The core-based tree (CBT) has three main limitations. The selection of the core node is difficult and the traffic is concentrated near the core router. Also, the CBT does not consider an optimisation of network resource utilisation. In the proposed non-core based tree (NCBT), an on-tree node is assigned to each incoming member for multicasting such that the maximum end-to-end delay and the tree costs are jointly minimised.
6.
7.
High-speed VLSI architecture for parallel Reed-Solomon decoder
Hanho Lee 《Very Large Scale Integration (VLSI) Systems, IEEE Transactions on》2003,11(2):288-294
This paper presents a high-speed parallel Reed-Solomon (RS) (255,239) decoder architecture using the modified Euclidean algorithm for high-speed multigigabit-per-second fiber-optic systems. Pipelining and parallelizing allow inputs to be received at very high fiber-optic rates and outputs to be delivered at correspondingly high rates with minimum delay. A parallel processing architecture results in speed-ups of 10 Gb/s or more, since the maximum achievable clock frequency is generally bounded by the critical path of the modified Euclidean algorithm block. The parallel RS decoders have been designed and implemented in 0.13-μm CMOS standard cell technology with a supply voltage of 1.1 V. It is suggested that a parallel RS decoder which can keep up with optical transmission rates, i.e., 10 Gb/s and beyond, could be implemented. The proposed four-channel parallel RS decoder operates at a clock frequency of 770 MHz and has a data processing rate of 26.6 Gb/s.
8.
9.
A novel VLSI architecture is proposed for implementing a long-constraint-length Viterbi decoder (VD) for code rate k/n. This architecture is based on the encoding structure in which k input bits are shifted into k shift registers in each cycle. The architecture is designed in a hierarchical manner by breaking the system into several levels and designing each level independently. The tasks in the design of each level range from determining the number of computation units, and the interconnection between the units, to the allocation and scheduling of operations. Additional design issues such as in-place storage of accumulated path metrics and traceback implementation of the survivor memory have also been addressed. The resulting architecture is regular, has a foldable global topology, and is very flexible. It also achieves a better-than-linear trade-off between hardware complexity and computation time.
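The abstract concerns the VLSI architecture rather than the algorithm itself; as background, a minimal software hard-decision Viterbi decoder with traceback, for the classic rate-1/2, constraint-length-3 (7,5) code, might look like the sketch below. The code parameters are illustrative and not taken from the paper.

```python
def conv_encode(bits, K=3, gens=(0b111, 0b101)):
    """Rate-1/2 convolutional encoder with generators (7,5) octal."""
    state = 0
    out = []
    for b in bits:
        state = ((state << 1) | b) & ((1 << K) - 1)
        for g in gens:
            out.append(bin(state & g).count("1") % 2)
    return out

def viterbi_decode(rx, K=3, gens=(0b111, 0b101)):
    """Hard-decision Viterbi decoder; a state is the last K-1 input bits."""
    n_states = 1 << (K - 1)
    INF = float("inf")
    metric = [0.0] + [INF] * (n_states - 1)  # encoder starts in state 0
    history = []
    n_sym = len(rx) // len(gens)
    for t in range(n_sym):
        r = rx[t * len(gens):(t + 1) * len(gens)]
        new = [INF] * n_states
        back = [None] * n_states  # survivor: (previous state, input bit)
        for s in range(n_states):
            if metric[s] == INF:
                continue
            for b in (0, 1):
                full = ((s << 1) | b) & ((1 << K) - 1)
                ns = full & (n_states - 1)
                expected = [bin(full & g).count("1") % 2 for g in gens]
                m = metric[s] + sum(x != y for x, y in zip(expected, r))
                if m < new[ns]:
                    new[ns] = m
                    back[ns] = (s, b)
        metric = new
        history.append(back)
    # trace back from the best final state (survivor memory in hardware)
    s = min(range(n_states), key=lambda i: metric[i])
    bits = []
    for back in reversed(history):
        s, b = back[s]
        bits.append(b)
    return bits[::-1]
```

The accumulated path metrics (`metric`) and the survivor memory (`history`) here correspond directly to the in-place path-metric storage and traceback units discussed in the abstract.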
10.
A low-complexity design architecture for implementing the Successive Cancellation (SC) decoding algorithm for polar codes is presented. Hardware design of polar decoders is accomplished using SC decoding due to the reduced intricacy of the algorithm. The merged processing element (MPE) block is the primary area-occupying factor of the SC decoder as it incorporates numerous sign and magnitude conversions. The two's complement method is typically used in the MPE block of the SC decoder. In this paper, a low-complexity MPE architecture with minimal two's complement conversion is proposed. A reformulation is also applied to the merged processing elements at the final stage of the SC decoder to generate two output bits at a time. The proposed merged processing element thereby reduces the hardware complexity of the SC decoder and also reduces latency by an average of 64%. An SC decoder with code length 1024 and code rate 1/2 was designed and synthesized using 45-nm CMOS technology. The implementation results of the proposed decoder display significant improvement in the Technology Scaled Normalized Throughput (TSNT) value and an average 48% reduction in hardware complexity compared to the prevalent SC decoder architectures. Compared to the conventional SC decoder, the presented method displayed a 23% reduction in area.
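The MPE of an SC decoder merges the standard f and g LLR updates; the sign/magnitude handling the abstract refers to comes from the min-sum form of f. A minimal sketch of the two functions follows (min-sum form, as commonly used in hardware; this is not the paper's exact reformulation):

```python
def f_minsum(a, b):
    """Upper-branch LLR update: min-sum approximation of
    f(a,b) = 2*atanh(tanh(a/2)*tanh(b/2)).
    Needs only the sign bits and the two magnitudes."""
    sign = -1.0 if (a < 0) != (b < 0) else 1.0
    return sign * min(abs(a), abs(b))

def g_func(a, b, u):
    """Lower-branch LLR update given the partial-sum bit u:
    g(a,b,u) = b + (1-2u)*a, i.e. an add or a subtract."""
    return b - a if u else b + a
```

Because `f_minsum` works on sign bits and magnitudes while `g_func` works on signed numbers, a conventional MPE repeatedly converts between two's-complement and sign-magnitude representations, which is the conversion overhead the proposed architecture minimizes.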
11.
In this letter, tradeoffs between very large scale integration implementation complexity and performance of block turbo decoders are explored. We address low-complexity design strategies on choosing the scaling factor of the log extrinsic information and on reducing the number of hard-decision decodings during a Chase search.
12.
13.
FPGA-based Hamming code encoding/decoding system
The basic principles of Hamming encoding and decoding are discussed and verified by simulation on an FPGA; on this basis, the concept of the extended Hamming code is introduced and simulated as well. Both designs were downloaded to and implemented on an FPGA, and the results show that the design meets the error-detection and error-correction requirements and offers practical guidance for engineering.
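The abstract above does not give the code parameters; as a minimal software sketch of the principle an FPGA implementation would realize in logic, here is the classic Hamming(7,4) encoder and single-error-correcting decoder (the extended code mentioned in the abstract would add one overall parity bit for double-error detection):

```python
# Parity-check rows: row i checks the codeword positions whose
# 1-indexed position has bit i set (1,3,5,7 / 2,3,6,7 / 4,5,6,7).
H = [
    [1, 0, 1, 0, 1, 0, 1],
    [0, 1, 1, 0, 0, 1, 1],
    [0, 0, 0, 1, 1, 1, 1],
]

def hamming_encode(d):
    """d = [d1,d2,d3,d4] -> 7-bit codeword [p1,p2,d1,p3,d2,d3,d4]."""
    d1, d2, d3, d4 = d
    p1 = d1 ^ d2 ^ d4
    p2 = d1 ^ d3 ^ d4
    p3 = d2 ^ d3 ^ d4
    return [p1, p2, d1, p3, d2, d3, d4]

def hamming_decode(c):
    """Correct a single bit error: the syndrome value, read as a
    binary number, is the 1-indexed position of the flipped bit."""
    c = list(c)
    s = 0
    for i, row in enumerate(H):
        if sum(r * x for r, x in zip(row, c)) % 2:
            s |= 1 << i
    if s:
        c[s - 1] ^= 1  # flip the erroneous position
    return [c[2], c[4], c[5], c[6]]
```

In hardware, the three syndrome bits are just XOR trees over the received bits, and the correction is a decoder driving one XOR per codeword position.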
14.
This paper presents a novel multi-Gb/s multi-mode LDPC decoder architecture and efficient design techniques for gigabit wireless communications. An efficient dynamic and fixed column-shifting scheme is presented for multi-mode architectures. A novel low-complexity local switch is proposed to implement the dynamic and fixed column-shifting scheme. Furthermore, an efficient quantization method and the usage of a one's-complement scheme instead of a two's-complement scheme are explored. The proposed decoder achieves very high throughput with minimal area overhead. Post-layout results using TSMC 65-nm CMOS technology show much better throughput, as well as better area- and energy-efficiency, compared to other multi-mode LDPC decoders.
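The one's-complement point can be illustrated generically (this is an assumption about the motivation, not the paper's exact scheme): converting a negative two's-complement value to the sign/magnitude form used by min-sum check nodes requires an invert-and-add-one step, whereas the one's-complement shortcut keeps only the bit inversion, trading the carry chain for a one-LSB magnitude error.

```python
def to_sign_mag(x):
    """Exact two's-complement -> (sign, magnitude) conversion;
    negatives need invert + add 1, i.e. a carry chain in hardware."""
    if x >= 0:
        return 0, x
    return 1, -x

def to_sign_mag_ones(x, bits=8):
    """One's-complement shortcut: magnitude by bit inversion only.
    Off by one LSB for negatives, but no carry propagation."""
    if x >= 0:
        return 0, x
    return 1, (~x) & ((1 << bits) - 1)  # equals -x - 1
```

In an iterative decoder the one-LSB bias is often tolerable (or absorbed by scaling), which is why dropping the +1 step can pay off in area and critical-path delay.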
15.
Szu-Wei Lee C.-C. Jay Kuo 《Journal of Visual Communication and Image Representation》2011,22(1):61-72
Context-based adaptive variable length coding (CAVLC) and universal variable length coding (UVLC) are two entropy coding tools that are supported in all profiles of H.264/AVC coders. In this paper, we investigate the relationship between the bit rate and the CAVLC/UVLC decoding complexity. This relationship can help the encoder choose the coding parameters that yield the best tradeoff among rate, distortion, and decoding complexity. A practical application of CAVLC/UVLC decoding-complexity reduction is also discussed.
16.
Image interpolation, the problem of producing a sequence of intermediate frames between two input images, is of significant interest. Its applications are numerous, including animation of still images, view interpolation, and temporal interpolation. In this paper, we develop an improved path-based interpolation method built on the original path-based framework by introducing two innovations. On one hand, we use optical flow to decide the direction of each path, to constrain the path length, and to maintain global path coherency, which improves efficiency significantly. On the other hand, we introduce a pixel-interlacing model to obtain more accurate optical flow, which in turn improves the accuracy of path selection. Our improved path-based method matches the original method in image quality across various interpolation applications while far surpassing it in efficiency.
17.
To address the severe multipath effects in troposcatter communication systems and the excessive complexity of the commonly used frequency-domain nonlinear equalization algorithms, a low-complexity decision-feedback equalization method based on noise prediction is proposed. Starting from the common multipath channel model for troposcatter communication, and assuming a delay power spectrum with exponential decay, part of the coefficients of the nonlinear decision-feedback equalizer are predicted by curve fitting, thereby reducing the algorithm's complexity. Simulation results show that the proposed method offers not only low complexity but also good bit-error-rate performance; with a suitable threshold, the performance loss relative to the original algorithm can be kept within 0.1 dB.
18.
19.
For image upscaling, the key is to balance image quality against computational complexity. Traditional methods based on linear or cubic-spline interpolation introduce distortions such as blurring and jagged edges; to solve this problem, iterative and learning-based algorithms have been proposed, but they incur very high computational cost. Taking these points together, this paper proposes an adaptive covariance-based image upscaling method (adaptive covariance-based edge diffusion, ACED) that balances upscaling quality and complexity well. The method introduces a joint edge-discrimination criterion and adaptively selects a diffusion template to estimate the local covariance coefficients, efficiently reducing upscaling distortion. Experimental results show that the proposed method achieves large gains in both subjective and objective quality while maintaining low computational complexity.