Similar articles
20 similar articles found (search time: 31 ms)
1.
A novel Joint Source and Channel Decoding (JSCD) scheme for Variable Length Codes (VLCs) concatenated with turbo codes, based on a new super-trellis decoding algorithm, is presented in this letter. The basic idea of the decoding algorithm is that source a priori information, in the form of bit transition probabilities corresponding to the VLC tree, can be derived directly from sub-state transitions in the new composite-state super-trellis. A Maximum Likelihood (ML) decoding algorithm for VLC sequence estimation based on the proposed super-trellis is also described. Simulation results show that the new iterative decoding scheme obtains a clear coding gain, especially for Reversible Variable Length Codes (RVLCs), compared with classical separate turbo decoding and with previous joint decoding that ignores source statistics.
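The bit-level priors this abstract refers to can be illustrated with a small sketch (the codebook and function name are hypothetical, not the letter's implementation): each internal node of the VLC code tree yields P(next bit = 1) from the probability mass of the codewords passing through its children.

```python
def bit_transition_priors(codebook):
    # codebook: {symbol: (bitstring, probability)}; returns, for every proper
    # prefix of a codeword, P(next bit = 1 | prefix) derived from the VLC tree.
    mass = {}  # prefix -> total probability of codewords passing through it
    for bits, p in codebook.values():
        for i in range(len(bits) + 1):
            mass[bits[:i]] = mass.get(bits[:i], 0.0) + p
    priors = {}
    for prefix in mass:
        one = mass.get(prefix + "1", 0.0)
        zero = mass.get(prefix + "0", 0.0)
        if one + zero > 0:  # internal node of the code tree
            priors[prefix] = one / (one + zero)
    return priors
```

For the toy code {a: 0, b: 10, c: 11} with probabilities 0.5/0.25/0.25, the prior at the root and at prefix "1" is 0.5 each, matching the source statistics the decoder can exploit.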

2.
Commonly used non-exhaustive-list a posteriori probability detection algorithms adopt a constant, relatively large list size, which makes the list highly redundant. To address this, the paper proposes an Adaptive Size List Sphere Decoding (ASLSD) algorithm. By updating the detection radius and setting a stopping condition, the algorithm lets the detection list size adapt to the SNR and the iteration count. Moreover, by combining list management with List Sphere Decoding (LSD) detection, it avoids repeatedly detecting symbol sequences under different radii. Simulations show that, at the cost of a small performance loss, the algorithm greatly reduces the required detection list size and thus effectively lowers receiver complexity.

3.
In this paper, a new still image coding scheme is presented. In contrast with standard tandem coding schemes, where the redundancy is introduced after source coding, here it is introduced before source coding using real BCH codes. A joint channel model is first presented, corresponding to a memoryless mixture of Gaussian and Bernoulli-Gaussian noise; it may represent the source coder, the channel coder, the physical channel, and their corresponding decoders. Decoding algorithms are derived from this channel model and compared to a state-of-the-art real BCH decoding scheme. A further comparison of two reference tandem coding schemes with the proposed joint coding scheme for the robust transmission of still images is also presented. When the tandem scheme is not accurately tuned, the joint coding scheme outperforms it in all situations. Compared to a tandem scheme well tuned for a given channel situation, the joint coding scheme shows increased robustness as the channel conditions worsen. The graceful performance degradation observed as the channel worsens gives an additional advantage to the joint source-channel coding scheme for fading channels, since a reconstruction of moderate quality may still be possible even when the channel is in a deep fade.

4.
An initial bootstrap step for the decoding of low-density parity-check (LDPC) codes is proposed. Decoding is initiated by first erasing a number of less reliable bits. New values and reliabilities are then assigned to erasure bits by passing messages from nonerasure bits through the reliable check equations. The bootstrap step is applied to the weighted bit-flipping algorithm to decode a number of LDPC codes. Large improvements in both performance and complexity are observed.
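A minimal sketch of the bootstrap idea (toy parity checks given as index lists; the threshold and the min-reliability update rule here are illustrative, not necessarily the paper's exact weighting): bits with small |LLR| are erased, and an erased bit is re-estimated from a check equation in which it is the only erased participant.

```python
def bootstrap_erasures(llr, checks, threshold):
    # Erase low-reliability bits, then re-estimate each erased bit from any
    # check equation in which it is the only erased participant.
    hard = [0 if l >= 0 else 1 for l in llr]
    erased = {i for i, l in enumerate(llr) if abs(l) < threshold}
    out = list(llr)
    for chk in checks:
        er = [i for i in chk if i in erased]
        if len(er) == 1:                       # exactly one erased bit in this check
            others = [i for i in chk if i != er[0]]
            value = sum(hard[i] for i in others) % 2   # parity implied by the others
            reliability = min(abs(llr[i]) for i in others)
            out[er[0]] = reliability if value == 0 else -reliability
    return out
```

With LLRs [2.0, -3.0, 0.1] and the single check {0, 1, 2}, bit 2 is erased and reassigned the value implied by bits 0 and 1, with reliability min(2, 3) = 2.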

5.
This paper proposes a hybrid decoding algorithm for concatenated convolutional codes. The algorithm works in two stages: the first stage uses the Belief Propagation (BP) algorithm, and the second a Modified Viterbi Decoding (MVD) algorithm. BP first pre-decodes the received sequence and uses the syndrome to classify the output log-likelihood ratios as reliable or unreliable. Unreliable log-likelihood ratios are replaced by the received symbols, while reliable ones are hard-decided into code symbols; together they form a hybrid sequence. MVD then performs further error correction on this hybrid sequence. Simulations show that, compared with the conventional Viterbi algorithm, the proposed hybrid decoding algorithm suffers only a small loss in error performance, while its average decoding complexity drops markedly at medium-to-high SNR.
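The construction of the hybrid sequence can be sketched as follows (a toy illustration with parity checks as index lists and BPSK mapping 0 → +1, 1 → -1; the paper's syndrome test operates on the convolutional code's structure):

```python
def hybrid_sequence(llr, rx, checks):
    # Classify positions via syndrome checks: bits in any unsatisfied check are
    # deemed unreliable and keep the raw received symbol; the rest are
    # hard-decided into clean code symbols for the second-stage Viterbi pass.
    hard = [0 if l >= 0 else 1 for l in llr]
    unreliable = set()
    for chk in checks:
        if sum(hard[i] for i in chk) % 2:      # unsatisfied parity check
            unreliable.update(chk)
    return [rx[i] if i in unreliable else float(1 - 2 * hard[i])
            for i in range(len(llr))]
```

In the test below, the first check is satisfied, so bits 0 and 1 are replaced by clean ±1 symbols, while the bits of the failing second check keep their noisy received values.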

6.
Several recent publications have shown that joint source-channel decoding could be a powerful technique to take advantage of residual source redundancy for fixed- and variable-length source codes. This letter gives an in-depth analysis of a low-complexity method recently proposed by Guivarch et al., where the redundancy left by a Huffman encoder is used at a bit level in the channel decoder to improve its performance. Several simulation results are presented, showing for two first-order Markov sources of different sizes that using a priori knowledge of the source statistics yields a significant improvement, either with a Viterbi channel decoder or with a turbo decoder.

7.
A universal variable-to-fixed length algorithm for binary memoryless sources which converges to the entropy of the source at the optimal rate is known. We study the problem of universal variable-to-fixed length coding for the class of Markov sources with finite alphabets. We give an upper bound on the performance of the code for large dictionary sizes and show that the code is optimal in the sense that no codes exist that have better asymptotic performance. The optimal redundancy is shown to be H log log M/log M, where H is the entropy rate of the source and M is the code size. This result is analogous to Rissanen's (1984) result for fixed-to-variable length codes. We investigate the performance of a variable-to-fixed coding method which does not need to store the dictionaries, either at the coder or the decoder. We also consider the performance of both these source codes on individual sequences. For individual sequences we bound the performance in terms of the best code length achievable by a class of coders. All the codes that we consider are prefix-free and complete.
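Variable-to-fixed length coding of the kind analyzed here can be illustrated with the classic Tunstall construction for a memoryless source (a sketch only; the paper's universal codes for Markov sources are more involved): the dictionary grows by repeatedly splitting the most probable leaf.

```python
import heapq

def tunstall_dictionary(probs, M):
    # Grow a complete, prefix-free dictionary of source words up to size M by
    # repeatedly splitting the most probable leaf into one child per symbol.
    heap = [(-p, s) for s, p in probs.items()]   # max-heap via negated probs
    heapq.heapify(heap)
    while len(heap) + len(probs) - 1 <= M:       # a split adds (alphabet size - 1) leaves
        p, word = heapq.heappop(heap)
        for s, ps in probs.items():
            heapq.heappush(heap, (p * ps, word + s))
    return sorted(word for _, word in heap)
```

Each dictionary word is then mapped to a fixed-length index of ceil(log2 M) bits; for the binary source {a: 0.7, b: 0.3} with M = 4, the dictionary is {aaa, aab, ab, b} and every parsed word costs 2 bits.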

8.
For one class of Low-Density Parity-Check (LDPC) codes with low row weight in the parity-check matrix, a new Syndrome Decoding (SD) scheme based on heuristic Beam Search (BS), labeled SD-BS, is put forward to improve error performance. First, two observations are made and verified by simulation results. One is that, in the SNR region of interest, hard-decision of the corrupted sequence yields only a handful of erroneous bits. The other is that, given sufficient beam width, the true error pattern for a nonzero syndrome has a high probability of surviving the competition in the BS. Bearing these two points in mind, the decoding of LDPC codes is transformed into seeking an error pattern with the known decoding syndrome. Second, the effectiveness of SD-BS depends closely on how the bit reliability is evaluated. Inspired by a bit-flipping definition in the existing literature, a new metric is employed in the proposed SD-BS. The strength of SD-BS is demonstrated by applying it both directly to corrupted sequences and to the decoding failures of Belief Propagation (BP).
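The search for an error pattern matching a known syndrome can be sketched with a generic beam search (a toy illustration; the reliability metric and pruning rule here are simple placeholders for the paper's metric):

```python
import numpy as np

def sd_bs(H, syndrome, reliability, beam_width, max_weight):
    # Beam search for a low-weight error pattern e with H @ e = syndrome (mod 2).
    order = np.argsort(reliability)              # try least reliable bits first
    beam = [((), syndrome.copy())]               # (flipped positions, residual syndrome)
    for _ in range(max_weight):
        cand = []
        for flips, s in beam:
            for i in order:
                if i not in flips:
                    cand.append((flips + (int(i),), (s + H[:, i]) % 2))
        cand.sort(key=lambda c: int(c[1].sum()))  # prefer smaller residual syndrome
        beam = cand[:beam_width]
        for flips, s in beam:
            if not s.any():                      # syndrome resolved: pattern found
                return set(flips)
    return None                                  # no pattern within the weight budget
```

For a 2x3 toy check matrix with a single error at the least reliable position, the search recovers the error pattern in one step.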

9.
A distributed source coding algorithm for clustered wireless sensor networks   (Total citations: 1; self-citations: 0; citations by others: 1)
To address the heavy redundancy in the data gathered by dense wireless sensor networks, this paper proposes a distributed source coding algorithm suited to clustered wireless sensor networks. Using side information as the initial reference source, the algorithm exploits the correlation between sources to determine each source's encoding order and reference source; each source is then encoded relative to its reference, and the receiver decodes accordingly using the encoding order and reference sources. For the modulo-based encoding scheme, a low-complexity decoding algorithm is also given. Theoretical analysis and simulation results show that applying the algorithm in clustered routing protocols effectively reduces the number of bits each node transmits, lowering network energy consumption and extending network lifetime.
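The modulo-based encoding with side-information decoding mentioned above can be sketched as follows (a generic textbook illustration, not the paper's exact scheme): the sensor sends only the k least-significant bits, and the receiver resolves the ambiguity using a correlated reading.

```python
def dsc_encode(x, k):
    # transmit only the k least-significant bits of the reading
    return x % (1 << k)

def dsc_decode(m, side_info, k):
    # Pick the value congruent to m (mod 2^k) that is closest to the side
    # information; correct whenever |x - side_info| < 2**(k - 1).
    step = 1 << k
    base = side_info - ((side_info - m) % step)
    return min(base, base + step, key=lambda c: abs(c - side_info))
```

For example, a reading of 205 is sent as 205 mod 16 = 13 (4 bits instead of 8), and a receiver holding the correlated value 200 recovers 205 exactly.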

10.
Building on the Reduced List Syndrome Decoding (RLSD) algorithm and the structure of regular QC-LDPC codes, this paper proposes a new concatenated BP-RLSD decoding algorithm for short and medium-length QC-LDPC codes. When BP decoding fails, its soft-output log-likelihood information is fed to the RLSD algorithm. Exploiting the cyclic-permutation structure of QC-LDPC codes, the paper gives an algorithm that uses the syndrome weight to determine the search space of candidate error patterns, together with a table-lookup method that quickly finds some of the error positions. Combined with the low-reliability positions (LRIPs) of the received sequence, a fast search for the maximum-likelihood (ML) codeword can be performed. These methods greatly reduce computation time. Simulations show that the proposed algorithm is effective: concatenated with BP decoding, it achieves a good trade-off between computational complexity and performance.

11.
The problem of achieving synchronization for variable-length source codes is addressed through the use of self-synchronizing binary prefix-condition codes. Although our codes are suboptimal in the sense of minimum average codeword length, they have the advantages of being generated by an explicit constructive algorithm, having minimal additional redundancy compared with optimal codes-as little as one additional bit introduced into the least likely codeword for a large class of sources-and having statistical synchronizing performance that improves on that of the optimal code in many cases.
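The synchronization behavior this abstract studies can be seen with a plain greedy prefix decoder (a toy code, not the paper's construction): after a bit error, decoding is garbled for a while but typically falls back into step.

```python
def vlc_decode(bits, code):
    # code: {bitstring: symbol}; greedy prefix-condition decoding of a bit stream
    out, buf = [], ""
    for b in bits:
        buf += b
        if buf in code:          # a complete codeword has been read
            out.append(code[buf])
            buf = ""
    return out
```

With the code {a: 0, b: 10, c: 11}, flipping the first bit of the stream for "abcb" garbles the leading symbols, yet the decoder regains synchronization before the final symbol; self-synchronizing designs aim to make this recovery fast and predictable.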

12.
张晗, 涂巧玲, 曹阳, 李小红, 彭小峰. 《红外与激光工程》(Infrared and Laser Engineering), 2019, 48(7): 722004-0722004(9)
To improve the decoding performance and transmission efficiency of optical communication links over weakly turbulent atmospheric channels, a rate-adaptive polar code is designed based on the nested property of polar code information bits. The code polarizes sufficiently in weak-turbulence channels and corrects errors well. To adjust the code rate, a CRC check is introduced as the transmitter's stopping flag: codewords of successively lower rates are sent until the decoding result passes the check, at which point the code rate is the maximum rate that guarantees reliable transmission. Simulations under different turbulence strengths show that, at a frame error rate of 10^-8, the rate-adaptive polar code obtains a 1.7-2.3 dB performance gain over conventional polar codes. The delay of the rate-adaptive polar code is also simulated and, combined with the frame error rate, its information throughput is obtained; the results show that in weak-turbulence channels the throughput of the rate-adaptive polar code meets the transmission requirements of FSO.
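The CRC stopping rule can be sketched generically (the `transmit` round trip and the rate schedule are hypothetical stand-ins for the polar encoder, turbulent channel, and decoder): keep lowering the rate until the decoded frame's CRC checks.

```python
import zlib

def add_crc(data: bytes) -> bytes:
    # append a 32-bit CRC so the receiver can verify a decoding attempt
    return data + zlib.crc32(data).to_bytes(4, "big")

def crc_ok(frame: bytes) -> bool:
    return zlib.crc32(frame[:-4]).to_bytes(4, "big") == frame[-4:]

def adaptive_rate_send(data, rates, transmit):
    # transmit(frame, rate) stands in for the hypothetical encode/channel/decode
    # round trip; rates are tried from highest (least redundancy) downwards.
    frame = add_crc(data)
    for r in sorted(rates, reverse=True):
        candidate = transmit(frame, r)
        if crc_ok(candidate):
            return r          # maximum rate that still decodes reliably
    return min(rates)
```

In the test, a channel that only decodes correctly at rate 0.5 or below makes the loop settle on 0.5, the maximum reliable rate.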

13.
Minimum redundancy coding (also known as Huffman coding) is one of the enduring techniques of data compression. Many efforts have been made to improve the efficiency of minimum redundancy coding, the majority based on the use of improved representations for explicit Huffman trees. In this paper, we examine how minimum redundancy coding can be implemented efficiently by divorcing coding from a code tree, with emphasis on the situation when n is large, perhaps on the order of 10^6. We review techniques for devising minimum redundancy codes, and consider in detail how encoding and decoding should be accomplished. In particular, we describe a modified decoding method that allows improved decoding speed, requiring just a few machine operations per output symbol (rather than for each decoded bit), and uses just a few hundred bytes of memory above and beyond the space required to store an enumeration of the source alphabet.
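Tree-free decoding of this kind is commonly realized with canonical Huffman codes; the sketch below (a standard textbook construction, not necessarily this paper's exact method) decodes using only per-length first-code and first-symbol tables.

```python
def canonical_decode(bits, counts, symbols):
    # counts[l] = number of codewords of length l (counts[0] unused);
    # symbols listed in canonical order (by code length, then lexicographically).
    first_code = [0] * len(counts)
    first_sym = [0] * len(counts)
    code = idx = 0
    for l in range(1, len(counts)):      # derive canonical code boundaries
        first_code[l], first_sym[l] = code, idx
        code = (code + counts[l]) << 1
        idx += counts[l]
    out, acc, l = [], 0, 0
    for b in bits:
        acc, l = (acc << 1) | b, l + 1
        off = acc - first_code[l]
        if 0 <= off < counts[l]:         # acc is a complete length-l codeword
            out.append(symbols[first_sym[l] + off])
            acc = l = 0
    return out
```

The tables occupy a few words per code length rather than a node per tree edge, which is what makes the memory footprint independent of the alphabet's tree shape.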

14.
Decoding of linear unequal error protection (LUEP) codes using soft-combining is presented. Codewords are repeated and combined until the desired reliability is obtained. The decoding is performed in the spectral domain, offering reduced implementation complexity. Simulations show a significant performance improvement with soft-combining compared to soft-decision decoding.
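The repeat-and-combine step amounts to accumulating soft values across retransmissions before deciding (a minimal LLR-summing sketch; the paper's spectral-domain decoding is not shown):

```python
def soft_combine(llr_repeats):
    # Element-wise sum of LLRs from repeated transmissions of the same codeword,
    # followed by a hard decision (LLR >= 0 -> bit 0, else bit 1).
    combined = [sum(col) for col in zip(*llr_repeats)]
    hard = [0 if v >= 0 else 1 for v in combined]
    return combined, hard
```

In the test, the first transmission alone would misjudge bit 0 (negative LLR), but the combined LLRs decide both bits correctly.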

15.
Although the existence of universal noiseless variable-rate codes for the class of discrete stationary ergodic sources has previously been established, very few practical universal encoding methods are available. Efficient implementable universal source coding techniques are discussed in this paper. Results are presented on source codes for which a small value of the maximum redundancy is achieved with a relatively short block length. A constructive proof of the existence of universal noiseless codes for discrete stationary sources is first presented. The proof is shown to provide a method for obtaining efficient universal noiseless variable-rate codes for various classes of sources. For memoryless sources, upper and lower bounds are obtained for the minimax redundancy as a function of the block length of the code. Several techniques for constructing universal noiseless source codes for memoryless sources are presented and their redundancies are compared with the bounds. Consideration is given to possible applications to data compression for certain nonstationary sources.

16.
A novel iterative error control technique based on the threshold decoding algorithm and new convolutional self-doubly orthogonal codes is proposed. It differs from parallel concatenated turbo decoding as it uses a single convolutional encoder, a single decoder and hence no interleaver, neither at encoding nor at decoding. Decoding is performed iteratively using a single threshold decoder at each iteration, thereby providing good tradeoff between complexity, latency and error performance.

17.
李越, 张立军, 李明齐, 朱秋煜. 《电视技术》(Video Engineering), 2016, 40(12): 120-124
To address the high decoding complexity of RaptorQ codes, a Mode Selection Decoding (MSD) algorithm is proposed. The method combines the advantages of the Optimized Inactivation Decoding Gaussian Elimination (OIDGE) algorithm and the fast Dimension-Reduction Decoding (DRFD) algorithm: it jointly considers the channel's actual packet loss and the efficiency of the different decoders, and selects the appropriate decoding algorithm according to the measured packet loss rate. Experiments on an embedded system show that the algorithm adaptively selects a suitable decoder under different packet loss rates, improving the decoding efficiency of RaptorQ codes.

18.
Variable length codes (VLCs), used in data compression, are very sensitive to error propagation in the presence of noisy channels. To address this problem, several joint source-channel turbo techniques have been proposed in the literature. In this paper, we focus on source/VLC pairs of low redundancy, i.e., when there is a good match between the source statistics and the length distribution of the VLC. This case has not been considered extensively in the literature so far, and the classical concatenation of a VLC and a convolutional code is not satisfactory. Through EXIT chart and interleaving gain analysis, we show that introducing a repetition code between the VLC and the convolutional code considerably improves global performance. In particular, excellent symbol error rates are obtained with reversible VLCs, which are used in recent source codecs.

19.
A new lower bound, which is the tightest possible, is obtained for the redundancy of optimal binary prefix-condition (OBPC) codes for a memoryless source for which the probability of the most likely source letter is known. It is shown that this bound, and upper bounds obtained by Gallager and Johnsen, hold for infinite as well as finite source alphabets. Also presented are bounds on the redundancy of OBPC codes for sources satisfying the condition that each of the first several probabilities in the list of source probabilities is sufficiently large relative to the sum of the remaining probabilities.
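The quantity being bounded, the redundancy of an optimal binary prefix-condition code, can be computed directly for small sources (a plain Huffman construction; the bounds themselves are in the paper, and the 0.0861 constant in the test is Gallager's upper bound for p1 <= 0.5):

```python
import heapq
import itertools
import math

def huffman_lengths(probs):
    # Bottom-up Huffman construction; returns {symbol: code length}.
    tie = itertools.count()                  # tiebreaker so dicts are never compared
    heap = [(p, next(tie), {s: 0}) for s, p in probs.items()]
    heapq.heapify(heap)
    while len(heap) > 1:
        pa, _, a = heapq.heappop(heap)
        pb, _, b = heapq.heappop(heap)
        heapq.heappush(heap, (pa + pb, next(tie),
                              {s: l + 1 for s, l in {**a, **b}.items()}))
    return heap[0][2]

def redundancy(probs):
    # Average optimal code length minus source entropy, in bits per symbol.
    lengths = huffman_lengths(probs)
    avg = sum(p * lengths[s] for s, p in probs.items())
    entropy = -sum(p * math.log2(p) for p in probs.values())
    return avg - entropy
```

A dyadic source has zero redundancy, while a non-dyadic one has a strictly positive redundancy that stays within the known upper bound driven by the most likely letter's probability.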

20.
黄胜, 曹志雄, 郑秀凤. 《电讯技术》(Telecommunication Engineering), 2021, 61(11): 1385-1390
At short-to-medium block lengths, polar code channel polarization is incomplete, and in the decoding of parity-check-concatenated codes error propagation easily degrades decoding performance. To reduce the impact of error propagation on parity-check concatenation, a new parity-check concatenation scheme is designed. The method uses Gaussian estimation to select a subset of critical, error-prone information bits for non-uniform segmented parity checks, effectively reducing the impact of error propagation on parity-check performance; combined with a cyclic redundancy check that selects the correct path, it improves decoding performance for large list sizes and high SNR. Simulations show that the new concatenated code improves decoding performance by 0.1-0.15 dB on average over CA-SCL (Cyclic-redundancy-check Aided Successive Cancellation List) decoding. Moreover, combined with an adaptive algorithm, the performance gain lets the adaptive algorithm decode successfully with a smaller list, cutting the adaptive algorithm's decoding complexity by 6%-25% at lower SNR.


Copyright © 北京勤云科技发展有限公司 (Beijing Qinyun Technology Development Co., Ltd.). 京ICP备09084417号