Similar Documents
20 similar documents found.
1.
In this paper, space-time block coding is used in conjunction with Turbo codes to provide good diversity and coding gains. A new method of dividing the Turbo encoder and decoder into several parallel encoding and decoding blocks is considered. These blocks work simultaneously and yield a faster coding scheme than classical Turbo codes. The system concatenates the fast Turbo code as an outer code with Alamouti's G2 space-time block code as an inner code, combining the benefits of both techniques: acceptable diversity and coding gain together with short coding delay. Fast-fading Rayleigh and Rician channels are considered. For the Rayleigh fading channel, with a frame size of 5000 and a channel memory length of 10, the coding gain is 7.5 dB and a bit error rate (BER) of 10⁻⁴ is achieved at 7 dB. For the same frame size and channel memory length, the Rician fading channel yields the same BER at about 4.5 dB. Copyright © 2013 John Wiley & Sons, Ltd.
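For reference, the inner code above is Alamouti's G2 scheme in its standard textbook form (not reproduced from the cited paper): two symbols s_1, s_2 are sent from two antennas over two consecutive symbol periods,

    \mathbf{G}_2 = \begin{pmatrix} s_1 & s_2 \\ -s_2^{*} & s_1^{*} \end{pmatrix},

with rows indexing time slots and columns indexing transmit antennas, giving rate 1 and transmit diversity order 2 per receive antenna.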

2.
In this paper, we propose and present implementation results of a high-speed turbo decoding algorithm. The latency caused by (de)interleaving and iterative decoding in a conventional maximum a posteriori turbo decoder can be dramatically reduced with the proposed design. The latency reduction comes from the combination of the radix-4, center-to-top, parallel decoding, and early-stop algorithms. This reduced latency enables the use of the turbo decoder as a forward error correction scheme in real-time wireless communication services. The proposed scheme results in a slight degradation in bit error rate performance for large block sizes because the effective interleaver size in a radix-4 implementation is halved relative to the conventional method. To demonstrate the latency reduction, we implemented the proposed scheme on a field-programmable gate array and compared its decoding speed with that of a conventional decoder. The results show an improvement of at least fivefold for a single iteration of turbo decoding.
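For orientation (standard MAP notation assumed, not the paper's own), the radix-4 step collapses two trellis stages of the forward recursion into a single clock cycle:

    \alpha_{k+2}(s) = \max^{*}_{(s'',\,s')} \left[ \alpha_{k}(s'') + \gamma_{k+1}(s'',s') + \gamma_{k+2}(s',s) \right],

where \alpha denotes the forward state metric and \gamma the branch metric; the backward recursion for \beta is merged in the same way.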

3.
SISO decoding of block codes can be carried out on a trellis representation of the code. However, the complexity of such decoding is usually prohibitive and prevents practical implementation. This paper examines a new decoding scheme based on the soft-output Viterbi algorithm (SOVA) applied to a sectionalized trellis of linear block codes. The computational complexities of the new SOVA decoder and of the conventional SOVA decoder operating on a bit-level trellis are analyzed theoretically and derived for several linear block codes. These results are used to obtain optimum trellis sectionalizations for SOVA. For comparison, the optimum sectionalizations for the maximum a posteriori (MAP) and Max-Log-MAP algorithms, together with their computational complexities, are included. The results confirm that the new SOVA decoder is the most computationally efficient SISO decoder in comparison with the MAP and Max-Log-MAP algorithms. Simulation results of the bit error rate (BER) performance, assuming binary phase-shift keying (BPSK) and an additive white Gaussian noise (AWGN) channel, demonstrate that the performance of the new decoding scheme is not degraded. The BER performance of iterative SOVA decoding of serially concatenated block codes shows no difference in the quality of the soft outputs between the new decoding scheme and the conventional SOVA.

4.
Design and FPGA Implementation of a High-Rate Adaptive Turbo Encoder/Decoder   (cited by: 1; self-citations: 1; other citations: 0)
An FPGA implementation scheme for a high-rate adaptive Turbo encoder/decoder is proposed. The encoder module adopts a block-helical symmetric interleaver with specific parameters, which allows the encoder to construct high code rates through puncturing and to drive the register states of both component encoders back to zero with the same tail bits. In the SOVA decoding module, parallel computation of the accumulated path metrics for all states and parallel updating of the reliability values greatly increase the decoding speed. Simulation results show that this high-rate adaptive codec delivers good error performance and considerable practical value.

5.
We present an efficient VLSI architecture for a 3GPP LTE/LTE-Advanced Turbo decoder by exploiting the algebraic-geometric properties of the quadratic permutation polynomial (QPP) interleaver. The high-throughput 3GPP LTE/LTE-Advanced Turbo codes require a highly parallel decoder architecture. The Turbo interleaver is known to be the main obstacle to decoder parallelism because of the collisions it introduces in memory accesses. The QPP interleaver resolves these memory contention issues when several MAP decoders are used in parallel to improve Turbo decoding throughput. In this paper, we propose a low-complexity QPP interleaving address generator and a multi-bank memory architecture to enable parallel Turbo decoding. Design trade-offs in terms of area and throughput efficiency are explored to find the optimal architecture. The proposed parallel Turbo decoder has been synthesized, placed, and routed in a 65-nm CMOS technology, with a core area of 8.3 mm² and a maximum clock frequency of 400 MHz. This parallel decoder, comprising 64 MAP decoder cores, achieves a maximum decoding throughput of 1.28 Gbps at 6 iterations.
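As an illustration of the address generation only (a Python sketch, not the paper's hardware; the parameter pair K = 40, f1 = 3, f2 = 10 is one of the 3GPP LTE values), the QPP interleaver pi(i) = (f1*i + f2*i^2) mod K can be produced multiplier-free by a simple recursion:

    # Sketch: QPP interleaver addresses, direct form vs. the
    # multiplier-free recursion typically preferred in hardware.
    def qpp_direct(K, f1, f2):
        return [(f1 * i + f2 * i * i) % K for i in range(K)]

    def qpp_recursive(K, f1, f2):
        # pi(i+1) = (pi(i) + g(i)) mod K, with g(i+1) = (g(i) + 2*f2) mod K
        addrs, pi, g = [], 0, (f1 + f2) % K
        for _ in range(K):
            addrs.append(pi)
            pi = (pi + g) % K
            g = (g + 2 * f2) % K
        return addrs

    if __name__ == "__main__":
        K, f1, f2 = 40, 3, 10          # illustrative LTE parameter set
        assert qpp_direct(K, f1, f2) == qpp_recursive(K, f1, f2)
        print(qpp_recursive(K, f1, f2)[:8])

The recursion uses only additions modulo K, which is why a low-complexity hardware address generator is feasible.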

6.
Turbo product codes (TPC) are a high-performance error-correcting coding method with low decoding complexity and short decoding delay, and they achieve near-optimal performance at low signal-to-noise ratios. This paper describes a Chase-algorithm-based soft-input soft-output (SISO) iterative decoding algorithm for three-dimensional TPC, proposes a hardware design for a three-dimensional TPC decoder, and simulates and verifies it on an FPGA chip. Test results show that the decoder has strong error-correcting capability and meets the bit-error-rate requirements of mobile communications.
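For context, here is a minimal sketch of the test-pattern stage at the heart of a Chase-II component decoder (generic form with illustrative names, not taken from the cited design): the p least-reliable positions are located from the soft input and all 2^p flip patterns on those positions are generated, each candidate then being passed to a hard-decision algebraic decoder.

    from itertools import product

    def chase2_candidates(soft_bits, p=4):
        # hard decisions from the signs of the soft inputs
        hard = [1 if x >= 0 else 0 for x in soft_bits]
        # indices of the p least-reliable (smallest-magnitude) positions
        lrp = sorted(range(len(soft_bits)), key=lambda i: abs(soft_bits[i]))[:p]
        for flips in product([0, 1], repeat=p):      # 2^p test patterns
            cand = hard[:]
            for pos, f in zip(lrp, flips):
                cand[pos] ^= f
            yield cand    # each candidate goes to the algebraic decoder

    if __name__ == "__main__":
        soft = [0.9, -0.1, 0.05, -1.2, 0.3, -0.02, 1.1, 0.4]
        print(sum(1 for _ in chase2_candidates(soft, p=3)))   # 8 candidates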

7.
Research on Adaptive Turbo Decoding Algorithms over Block-Correlated Fast-Fading Channels   (cited by: 2; self-citations: 1; other citations: 1)
The characteristics of block-correlated fast-fading channels are analyzed and a Turbo decoding algorithm for such channels is derived. The effect of the number of iterations on a Turbo-coded system is studied, and, under the condition of small SNR dispersion, an adaptive scheme that selects the optimal number of decoding iterations based on the average SNR is proposed; it balances decoding performance against decoding speed, yielding a lower average bit error rate together with a higher average decoding speed. Simulation results show that the proposed Turbo decoding algorithm relaxes the required channel-estimation accuracy while achieving the performance obtained with accurate channel estimation. For a target bit error rate of 10⁻⁴, the adaptive Turbo decoding algorithm lowers the average bit error rate by 40% compared with a fixed 4 iterations, improving system performance, and reduces the number of iterations by roughly 1/4 compared with a fixed 8 iterations, increasing decoding speed.

8.
吴团锋  杨喜根 《通信学报》2006,27(7):106-111
For quasi-coherently demodulated Turbo-coded GMSK signals, a simple iterative channel estimation algorithm is proposed. Based on the iterative decoding principle of Turbo codes, the method considers channel estimation and decoding jointly, feeding the decoder output back to refine the channel estimate iteratively and thereby improving estimation accuracy. Simulation results show that the method markedly improves the system's bit-error-rate performance.

9.
李建平  梁庆林 《电讯技术》2004,44(6):119-121
By adjusting the weighting coefficient applied to the received systematic-bit values during iterative decoding, this paper proposes a weighted iterative decoding algorithm for Turbo codes. The algorithm changes the relative weight, in the decoder's soft output after each iteration, of the received systematic-bit information versus its extrinsic estimate, giving the Turbo code excellent error-correcting performance at both low and high signal-to-noise ratios. Simulation results show that the weighted iterative decoding algorithm not only speeds up convergence but also further lowers the error floor, so that the bit error rate improves at both high and low SNR.
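In the usual log-domain notation (assumed here, not quoted from the paper), the decoder soft output splits into channel (systematic), a-priori, and extrinsic terms, and the weighting described above amounts to scaling the systematic term:

    L(\hat{u}_k) = w \, L_c \, y_k^{s} + L_a(u_k) + L_e(u_k),

where L_c is the channel reliability, y_k^{s} the received systematic sample, and w the adjustable weighting coefficient.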

10.
A carrier phase recovery scheme suited for turbo-coded systems with pre-coded Gaussian minimum shift keying (GMSK) modulation is proposed and evaluated in terms of bit-error-rate (BER) performance. The scheme uses the extrinsic information produced by the turbo decoder to aid an iterative carrier phase estimation process based on a maximum-likelihood (ML) strategy. The phase estimator works jointly with the turbo decoder, using the updated extrinsic information in every decoding iteration. A pre-coder removes the inherent differential encoding of GMSK modulation. Two GMSK signal bandwidths are considered, BT = 0.5 and 0.25, as recommended by the European Cooperation for Space Standardization (ECSS). It is shown that the performance of this technique is quite close to that of a perfectly synchronized system over a wide range of phase errors. The technique is further developed to recover nearly any phase error in [−π, +π] by increasing the number of phase estimators and joint decoding units, at the cost of increased system complexity. Copyright © 2008 John Wiley & Sons, Ltd.
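One standard way to write such a soft-decision-directed ML phase estimate (given here only for illustration, with assumed notation rather than the paper's exact expression) is

    \hat{\theta} = \arg\Big\{ \sum_{k} r_k \, \hat{s}_k^{*} \Big\},

where r_k are the received samples and \hat{s}_k are soft symbol estimates formed from the turbo decoder's extrinsic information (for antipodal symbols, e.g. \hat{s}_k = \tanh(L_e(c_k)/2)).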

11.
To satisfy advanced forward-error-correction (FEC) standards, in which convolutional codes and Turbo codes may coexist, a prototype design of a unified convolutional/Turbo decoder is proposed. In this paper, we systematically analyze the timing charts of both the Viterbi algorithm (VA) and the MAP algorithm. Three techniques, the Distribution, Pointer, and Parallel schemes, are then introduced; they can be used as flexible tools in timing-chart analysis either to reduce memory size or to increase throughput. Furthermore, we propose a tile-based methodology to analyze the key features of timing charts, such as computing/memory units and hardware utilization. On the basis of this timing analysis, we develop a VA/MAP timing chart with three modes (VA mode, MAP mode, and concurrent VA/MAP mode) by filling in the idle time of the VA and MAP decoding procedures. The combined timing analysis allows us to construct a unified component decoder with nearly 100% utilization of the processing element (PE) in both VA and MAP decoding.

12.
A neural network (NN)-based decoding algorithm for block Markov superposition transmission (BMST) is investigated. Decoders for the basic code with different network structures and different representations of the training data are implemented using NNs. By integrating the NN-based decoder of the basic code in an iterative manner, a sliding-window decoding algorithm is presented. To analyze the bit error rate (BER) performance, genie-aided (GA) lower bounds are derived. The NN-based BMST decoding algorithm offers a possible way to apply NNs to the decoding of long codes, in the sense that part of a conventional decoder can be replaced by an NN. Numerical results show that the NN-based decoder of the basic code can achieve the BER performance of the maximum-likelihood (ML) decoder. For BMST codes, the BER performance of the NN-based decoding algorithm matches the GA lower bound well and exhibits an extra coding gain.

13.
Reconfigurable Hardware Architectures for Sequential and Hybrid Decoding   (cited by: 1; self-citations: 0; other citations: 1)
A novel reconfigurable sequential decoder architecture based on the Fano algorithm is presented, in which the constraint length, the threshold spacing, and the time-out threshold are all run-time reconfigurable. To maximize decoding performance, the maximum possible backward search depth (a whole frame) is supported; this is achieved by using shift registers combined with memory to store the information of the entire visited path. A field-programmable gate array prototype of the decoder is built, and actual hardware decoding performance in terms of decoding speed, bit error rate (BER), and buffer overflow rate is measured and compared. To overcome the decoding delay inherent in sequential decoders, a hybrid scheme combining simple block codes and a cyclic redundancy check is proposed to limit the number of backward search operations the sequential decoder has to execute. As a result, a significant reduction in decoding delay and buffer overflow rate is achieved while maintaining comparable BER performance.

14.
Jian Wang  Yubai Li  Huan Li 《ETRI Journal》2013,35(5):767-774
In this paper, a novel parallel Viterbi decoding scheme is proposed to decrease decoding latency and power consumption in a software-defined radio (SDR) system. It implements a divide-and-conquer approach by first dividing a block into a series of subblocks, then performing independent Viterbi decoding for each subsequence, and finally merging the surviving subpaths into the final path. Moreover, a network-on-chip-based SDR platform is used to evaluate the performance of the proposed parallel Viterbi decoding scheme. The experimental results show that the scheme speeds up the Viterbi decoding process without increasing the BER and performs better than current state-of-the-art methods.

15.
Low-density parity-check (LDPC) codes and convolutional Turbo codes are two of the most powerful error-correcting codes and are widely used in modern communication systems. In a multi-mode baseband receiver, both LDPC and Turbo decoders may be required. However, the different decoding approaches for LDPC and Turbo codes usually lead to different hardware architectures. In this paper, we propose a unified message passing algorithm for LDPC and Turbo codes and introduce a flexible soft-input soft-output (SISO) module to handle LDPC/Turbo decoding. We employ the trellis-based maximum a posteriori (MAP) algorithm as a bridge between LDPC and Turbo decoding. We view the LDPC code as a concatenation of n super-codes, where each super-code has a simpler trellis structure so that the MAP algorithm can easily be applied to it. We propose a flexible functional unit (FFU) for MAP processing of LDPC and Turbo codes with low hardware overhead (about 15% area and timing overhead). Based on the FFU, we propose an area-efficient flexible SISO decoder architecture to support LDPC/Turbo decoding. Multiple such SISO modules can be embedded into a parallel decoder for higher decoding throughput. As a case study, a flexible LDPC/Turbo decoder has been synthesized in a TSMC 90 nm CMOS technology with a core area of 3.2 mm². The decoder supports IEEE 802.16e LDPC codes, IEEE 802.11n LDPC codes, and 3GPP LTE Turbo codes. Running at a 500 MHz clock frequency, the decoder can sustain up to 600 Mbps LDPC decoding or 450 Mbps Turbo decoding.

16.
To meet the high-throughput, low-power requirements of wireless communications, the architecture of parallel decoders has attracted wide attention. Starting from a parallel Turbo decoding algorithm, the symmetry between the forward and backward metric computations is studied, and an efficient parallel Turbo decoder architecture based on merged forward/backward computation is proposed and implemented on a field-programmable gate array (FPGA). The results show that, compared with existing parallel Turbo decoder architectures, the proposed design reduces the logic resources of the state-metric computation module by about 50% and the dynamic power at 125 MHz by 5.26%, while the decoding performance remains close to that of the parallel algorithm.

17.
In hardware implementations of many signal processing functions, timing errors on different circuit signals may differ greatly in importance with respect to the overall signal processing performance. This motivates us to apply the concept of unequal error tolerance to enable voltage overscaling with minimal signal processing performance degradation. Realizing unequal error tolerance involves two main issues: how to quantify the importance of each circuit signal, and how to incorporate this importance quantification into signal processing circuit design. We develop techniques to tackle these two issues and apply them to two types of trellis decoders: the Viterbi decoder for convolutional code decoding and the Max-Log maximum a posteriori (MAP) decoder for Turbo code decoding. Simulation results demonstrate promising energy-saving potential of the proposed design solution in both trellis decoding computation and memory storage, with small decoding performance degradation.

18.
The sliding window (SW) approach has been proposed as an effective means of reducing both the memory requirements and the decoding latency of the maximum a posteriori (MAP) based soft-input soft-output (SISO) decoder in a Turbo decoder. In this paper, we present sub-banked memory implementations (both single-port and dual-port) of the SW SISO decoder that achieve high throughput, low decoding latency, and reduced memory energy consumption. Our contributions include the derivation of the optimal memory sub-banking structure for different SW configurations, a study of the relationship between memory size and energy consumption for different SW configurations, and a study of the effect of the number of sub-banks on throughput and decoding latency for a given SW configuration.

19.
A cross-level pre-RAKE combining (PRC) scheme for a time-hopping pulse-amplitude-modulation ultra-wideband (TH-PAM UWB) transmitter is studied in this paper, and a two-stage cross-level PRC (CL-PRC) scheme is proposed. Conventional PRC schemes suppress all chip-wise interference, whereas the proposed scheme suppresses only the specific frame-wise inter-symbol interference (ISI) by exploiting the fact that the information bits are transmitted only in ultra-short time slots. This results in a low-complexity pre-equalizer without bit error rate (BER) performance degradation. Furthermore, an order selection rule is presented to trade off signal-to-interference ratio (SIR) against computational complexity. Simulation results illustrate the superior SIR and BER performance of the proposal. Copyright © 2009 John Wiley & Sons, Ltd.

20.
Motivated by the practical importance of hardware implementations of Turbo decoders, an improved Log-MAP decoding algorithm suited to parallel computation is proposed: in the computation of the intermediate decoding quantities, the n-input max* operation is simplified into a plain max operation plus the evaluation of a correction function, which reduces storage and yields a low-complexity Turbo decoder hardware architecture. The improved algorithm is applied to the CCSDS and WiMAX standards. Simulation results show that, compared with the conventional Log-MAP algorithm, the proposed simplified approximation effectively reduces decoding complexity and latency while its error-correcting performance remains close to that of Log-MAP, making it convenient for practical engineering use.
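As a plain illustration of the simplification being described (generic form, not the paper's exact correction function), the exact Jacobian logarithm versus its Max-Log-style reduction can be written in a few lines of Python:

    import math
    from functools import reduce

    def max_star(a, b):
        # exact: ln(e^a + e^b) = max(a, b) + ln(1 + e^{-|a-b|})
        return max(a, b) + math.log1p(math.exp(-abs(a - b)))

    def max_star_n(values):
        return reduce(max_star, values)   # n-input max*, applied pairwise (Log-MAP)

    def max_log_n(values):
        return max(values)                # Max-Log simplification: drop the correction

    if __name__ == "__main__":
        v = [0.3, -1.2, 2.5, 0.9]
        print(max_star_n(v), max_log_n(v))   # exact vs. approximate result

Dropping the correction term, or approximating it with a small lookup, is what trades a little accuracy for reduced complexity and storage.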
