Similar Articles
20 similar articles found (search time: 31 ms)
1.
The problem of low complexity linear programming (LP) decoding of low-density parity-check (LDPC) codes is considered. An iterative algorithm, similar to min-sum and belief propagation, for efficient approximate solution of this problem was proposed by Vontobel and Koetter. In this paper, the convergence rate and computational complexity of this algorithm are studied using a scheduling scheme that we propose. In particular, we are interested in obtaining a feasible vector in the LP decoding problem that is close to optimal in the following sense: the distance, normalized by the block length, between the minimum and the objective function value of this approximate solution can be made arbitrarily small. It is shown that such a feasible vector can be obtained with a computational complexity which scales linearly with the block length. Combined with previous results showing that the LP decoder can correct some fixed fraction of errors, we conclude that this error correction can be achieved with linear computational complexity. This is achieved by first applying the iterative LP decoder, which decodes the correct transmitted codeword up to an arbitrarily small fraction of erroneous bits, and then correcting the remaining errors using some standard method. These conclusions are also extended to generalized LDPC codes.
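The abstract does not reproduce the Vontobel-Koetter iterations; as a rough illustration of the min-sum-style message-passing structure it builds on, here is a minimal plain min-sum decoder sketch (not the paper's scheduled LP variant; `H`, `llr`, and the stopping rule are assumptions):

```python
import numpy as np

def min_sum_decode(H, llr, max_iter=50):
    """Plain min-sum decoding sketch (not the Vontobel-Koetter LP scheduler).

    H   : (m, n) binary parity-check matrix (every check degree >= 2 assumed)
    llr : (n,) channel log-likelihood ratios
    """
    m, n = H.shape
    msg = np.zeros((m, n))                     # check-to-variable messages
    hard = np.zeros(n, dtype=bool)
    for _ in range(max_iter):
        total = llr + msg.sum(axis=0)          # variable-node beliefs
        v2c = (total - msg) * H                # variable-to-check messages
        for j in range(m):
            idx = np.flatnonzero(H[j])
            vals = v2c[j, idx]
            sign = np.prod(np.sign(vals + 1e-12))
            mags = np.abs(vals)
            # outgoing message: product of the *other* signs times min of
            # the *other* magnitudes (sign/sign_k == sign*sign_k for +-1)
            for k, i in enumerate(idx):
                msg[j, i] = sign * np.sign(vals[k] + 1e-12) * np.delete(mags, k).min()
        hard = (llr + msg.sum(axis=0)) < 0     # tentative hard decisions
        if not np.any((H @ hard) % 2):         # all parity checks satisfied
            break
    return hard.astype(int)
```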

2.
To improve the decoding performance of a Ka-band Turbo-OFDM system while maintaining the information transmission rate, an adaptive control scheme for the Turbo code interleaver length is proposed. For a given target bit error rate, the scheme adaptively selects the optimal interleaver length based on the real-time channel SNR estimated by the Summer algorithm. Simulation results show that the scheme balances decoding performance against decoding speed, achieving a high average decoding speed together with a low average bit error rate.
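As a sketch of the selection logic described above, the following hypothetical lookup picks the shortest interleaver meeting a target BER at the estimated SNR; the thresholds and lengths are invented for illustration, not taken from the paper:

```python
def select_interleaver_length(snr_db, target_ber=1e-5):
    """Pick the shortest interleaver meeting the target BER at this SNR.

    The (snr_threshold, length) table is purely illustrative; a real system
    would calibrate it from BER-vs-SNR curves of the Turbo-OFDM link.
    """
    # shortest length first: high SNR tolerates a short interleaver
    table = [(8.0, 256), (5.0, 1024), (2.0, 4096), (float("-inf"), 16384)]
    for snr_threshold, length in table:
        if snr_db >= snr_threshold:
            return length
    return table[-1][1]
```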

3.
刘军清  李天昊 《通信学报》2007,28(9):112-118
This paper studies adaptive joint source-channel coding and proposes a new joint source-channel coding/decoding system based on error-correcting arithmetic codes. At the encoder, a forbidden symbol is embedded in the arithmetic code to realize integrated source-channel coding: using a Markov source model, the probability of the forbidden symbol is adjusted adaptively according to the channel state information, thereby adjusting the code rate to adapt to the channel. At the decoder, a MAP-based decoding metric is derived, and an improved stack sequence estimation algorithm based on this metric is proposed. Unlike conventional channel-adaptive coding algorithms, the proposed scheme adjusts only a single parameter, the forbidden symbol, and in theory achieves a continuously variable code rate. Experimental results show that, compared with the classical Grangetto joint coding system and with separate source/channel coding, the proposed system achieves a clearly improved performance gain.
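One standard relation behind forbidden-symbol arithmetic coding: a forbidden symbol of probability eps shrinks the coding interval by (1 - eps) per source symbol, costing -log2(1 - eps) extra bits. A small sketch of mapping a redundancy target to eps (how the channel state sets that target is left abstract, as an assumption):

```python
import math

def forbidden_symbol_prob(redundancy_bits_per_symbol):
    """Forbidden-symbol probability giving the desired per-symbol redundancy.

    Solving -log2(1 - eps) = rho for eps gives eps = 1 - 2**(-rho).
    """
    return 1.0 - 2.0 ** (-redundancy_bits_per_symbol)

def redundancy(eps):
    """Per-symbol redundancy (bits) introduced by forbidden probability eps."""
    return -math.log2(1.0 - eps)

# A noisier channel calls for more redundancy, hence a larger eps:
# forbidden_symbol_prob(0.1) ~= 0.067, forbidden_symbol_prob(0.5) ~= 0.293
```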

4.
To reduce the computational load of the Min-max decoding algorithm for non-binary low-density parity-check (LDPC) codes, an adaptive Min-max (AMM) decoding algorithm is proposed. Building on the Min-max algorithm, and using the check-node error rate (CER) after each iteration as a control parameter, the method adaptively truncates the message vectors of the variable nodes, discarding low-reliability components and updating only the high-reliability ones. Once the CER has fallen far enough, the set of check nodes is also truncated adaptively, so that messages are iteratively updated only for check nodes whose parity-check equations are unsatisfied, further reducing the complexity of the AMM algorithm. Simulation results show that, at the same error performance, the AMM algorithm requires about 20% less computation than a fixed-length truncated Min-max algorithm.
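A minimal sketch of the vector-truncation idea, assuming messages are stored as length-q reliability vectors; the CER-to-length mapping below is an invented placeholder, not the paper's schedule:

```python
import numpy as np

def truncate_messages(msgs, cer, q, min_keep=4):
    """Adaptively truncate non-binary message vectors (illustrative only).

    msgs : (n_vars, q) reliability vectors over GF(q) symbols
    cer  : check-node error rate measured after the last iteration
    Keeps more components while the CER is high and fewer as it drops.
    """
    keep = max(min_keep, int(np.ceil(q * min(1.0, cer * 10))))  # assumed mapping
    order = np.argsort(msgs, axis=1)[:, ::-1]          # sort by reliability
    mask = np.zeros_like(msgs, dtype=bool)
    rows = np.arange(msgs.shape[0])[:, None]
    mask[rows, order[:, :keep]] = True                 # keep top-'keep' entries
    return np.where(mask, msgs, -np.inf)               # drop the rest
```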

5.
We show that for low-density parity-check (LDPC) codes whose Tanner graphs have sufficient expansion, the linear programming (LP) decoder of Feldman, Karger, and Wainwright can correct a constant fraction of errors. A random graph will have sufficient expansion with high probability, and recent work shows that such graphs can be constructed efficiently. A key element of our method is the use of a dual witness: a zero-valued dual solution to the decoding linear program whose existence proves decoding success. We show that as long as no more than a certain constant fraction of the bits are flipped by the channel, we can find a dual witness. This new method can be used for proving bounds on the performance of any LP decoder, even in a probabilistic setting. Our result implies that the word error rate of the LP decoder decreases exponentially in the code length under the binary-symmetric channel (BSC). This is the first such error bound for LDPC codes using an analysis based on "pseudocodewords." Recent work by Koetter and Vontobel shows that LP decoding and min-sum decoding of LDPC codes are closely related by the "graph cover" structure of their pseudocodewords; in their terminology, our result implies that there exist families of LDPC codes where the minimum BSC pseudoweight grows linearly in the block length.

6.
The successive cancellation list (SCL) decoding algorithm for polar codes keeps a large number of surviving paths to achieve good performance, which makes its decoding complexity high; adaptive SCL decoding reduces some of this computation at high SNR but introduces a large decoding delay. Exploiting the sequential decoding structure of polar codes, this paper proposes an SCL decoding algorithm that combines segmented cyclic redundancy check (CRC) with adaptive selection of the number of surviving paths. Simulation results show that, compared with conventional CRC-aided SCL decoding and adaptive SCL decoding at code rate R = 0.5, the proposed algorithm reduces complexity by about 21.6% at low SNR (-1 dB) and by about 64% at high SNR (3 dB), while achieving good decoding performance.
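A minimal sketch of the adaptive-list idea (plain adaptive SCL, without the paper's segmented CRC); `scl_decode` and `crc_check` are hypothetical helpers:

```python
def adaptive_scl_decode(llr, scl_decode, crc_check, l_max=32):
    """Adaptive SCL sketch: grow the list size until a path passes the CRC.

    scl_decode(llr, L) -> list of candidate codewords (best first) and
    crc_check(word) -> bool are assumed given; the segmented-CRC variant
    in the paper checks partial CRCs during decoding instead of only at
    the end.
    """
    L = 1
    while L <= l_max:
        for candidate in scl_decode(llr, L):
            if crc_check(candidate):
                return candidate
        L *= 2                              # double the list size and retry
    return scl_decode(llr, l_max)[0]        # fall back to the best path
```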

7.
In this paper, we present an adaptive beamspace focusing technique for the direction of arrival (DOA) estimation of wideband signals. The proposed focusing scheme can perform coherent signal subspace transformation in the beamspace domain without preliminary DOA estimation or iteration. It can maintain low focusing error over a predefined sector-of-interest in the field-of-view (FOV) of the array while adaptively suppressing out-of-sector sources. The beamspace gain outside the sector-of-interest is controlled via additional constraints that provide robustness against moving or suddenly appearing out-of-sector sources. We formulate the adaptive beamspace design problem as a second-order cone program (SOCP) that can be solved efficiently using interior point methods. Numerical simulations are presented showing the superior performance of our approach compared to classical non-adaptive beamspace focusing techniques.
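As a toy illustration of casting such a beamspace design as an SOCP, the sketch below (using the cvxpy modeling package) minimizes the worst in-sector mismatch to a reference beamformer while bounding out-of-sector gain; the array model, angle grids, reference beams, and the 0.05 bound are all assumptions, not the paper's program:

```python
import numpy as np
import cvxpy as cp

M, K = 10, 4                                   # sensors, beams

def steering(theta):
    """Half-wavelength ULA steering vector at angle theta (radians)."""
    return np.exp(1j * np.pi * np.arange(M) * np.sin(theta))

in_sector = np.deg2rad(np.linspace(-10, 10, 21))   # sector-of-interest grid
out_sector = np.deg2rad(np.linspace(30, 60, 16))   # out-of-sector grid

# Reference beamformer whose in-sector response we try to reproduce
# (a few fixed beams here; the paper forms frequency-focused targets instead).
W0 = np.exp(1j * np.pi * np.outer(np.arange(M),
     np.sin(np.deg2rad([-7.5, -2.5, 2.5, 7.5])))) / np.sqrt(M)

B = cp.Variable((M, K), complex=True)              # beamspace matrix
focus_err = [cp.norm(B.H @ steering(t) - W0.conj().T @ steering(t))
             for t in in_sector]
constraints = [cp.norm(B.H @ steering(t)) <= 0.05  # suppress out-of-sector gain
               for t in out_sector]

prob = cp.Problem(cp.Minimize(cp.max(cp.hstack(focus_err))), constraints)
prob.solve()
print("worst in-sector focusing error:", prob.value)
```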

8.
刘星成  王康 《电子与信息学报》2009,31(12):3006-3009
To address the shortcomings of the adaptive Chase decoding algorithm for block Turbo codes, this paper proposes a block Turbo code decoding algorithm that adaptively quantizes the number of test sequences. Taking the number of test sequences C as the object of study, the method selects error patterns according to their error probabilities, and uses a quantized test function to adjust the number of test sequences as the SNR varies, thereby directly controlling the decoding complexity. Simulation results show that the proposed algorithm maintains decoding performance while directly reducing decoding complexity.
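A minimal sketch of Chase-style test-sequence generation with an SNR-quantized pattern count; the SNR-to-p quantizer here is an invented placeholder for the paper's quantized test function:

```python
import itertools
import numpy as np

def chase_test_patterns(reliability, snr_db):
    """Generate Chase test patterns over the p least reliable bit positions.

    reliability : per-bit reliability magnitudes (e.g. |LLR|)
    The quantizer below (p = 4/3/2 by SNR band) is illustrative only.
    """
    p = 4 if snr_db < 2 else (3 if snr_db < 4 else 2)   # assumed quantizer
    weakest = np.argsort(reliability)[:p]               # least reliable bits
    patterns = []
    for flips in itertools.product([0, 1], repeat=p):   # 2**p test sequences
        e = np.zeros(len(reliability), dtype=int)
        e[weakest] = flips
        patterns.append(e)
    return patterns
```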

9.
The Vector Viterbi Algorithm (VVA) is an optimal decoding algorithm in multiple-channel environments. It can be applied in coded Multi-Input Multi-Output (MIMO) systems, but its complexity makes its application difficult. This paper proposes an adaptive Threshold VVA (T-VVA) for coded MIMO systems. By adaptively choosing a threshold according to the Signal to Noise Ratio (SNR) of the channel, T-VVA searches only a subset of branches for path extension. A simple method to determine the threshold is also derived based on the Chernoff bound. Simulation results show that near-optimal decoding results can be obtained with greatly reduced complexity.
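A minimal sketch of the threshold-pruning step, assuming smaller metrics are better; in T-VVA the threshold itself comes from an SNR-dependent Chernoff-bound rule, which is not reproduced here:

```python
def prune_branches(branch_metrics, best_metric, threshold):
    """Keep only branches whose metric is within 'threshold' of the best one.

    branch_metrics : list of (branch, metric) pairs, metric = accumulated
                     distance (smaller is better)
    """
    return [b for b, m in branch_metrics if m <= best_metric + threshold]

# e.g. prune_branches([("b0", 1.2), ("b1", 3.9), ("b2", 1.5)], 1.2, 0.5)
# -> ["b0", "b2"]; only the surviving subset is extended at the next step.
```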

10.
閤大海  李元香  龚文引  何国良 《电子学报》2016,44(10):2535-2542
Adaptive operator selection has been used in differential evolution algorithms for global optimization and multi-objective optimization, but for constrained optimization it is difficult to find an appropriate credit assignment scheme for operator selection. This paper proposes a hybrid-population-based adaptive fitness scheme for assigning credit to mutation strategies in constrained optimization, and uses probability matching to adaptively select the differential mutation strategy; the mutation scale factor and crossover rate are also set adaptively to raise the success rate of the algorithm. Experimental results show that the algorithm achieves higher convergence accuracy and speed on constrained optimization problems than the CODEA/OED, ATMES, εBBO-dm, COMDE, and εDE algorithms, confirming the effectiveness of the adaptive scheme. The algorithm is applicable to scientific and engineering fields such as forecasting, quality control, and accounting processes.
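For reference, the generic probability-matching selection rule reads as below; the credit values fed into it would come from the paper's hybrid-population adaptive fitness scheme, which is assumed given:

```python
import random

def probability_matching(quality, p_min=0.05):
    """Probability-matching selection over mutation strategies.

    quality[i] is the running credit of strategy i; p_min keeps every
    strategy selectable (requires len(quality) * p_min <= 1).
    Returns the index of the chosen strategy.
    """
    k = len(quality)
    total = sum(quality) or 1.0
    probs = [p_min + (1.0 - k * p_min) * q / total for q in quality]
    r, acc = random.random(), 0.0
    for i, p in enumerate(probs):
        acc += p
        if r <= acc:
            return i
    return k - 1
```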

11.
李宇 《电视技术》2014,38(3):130-132
For a distributed space-frequency coding system based on linear dispersion codes, a new joint adaptive coding and decoding scheme is proposed. The relay nodes adaptively select the dispersion matrix and modulation scheme according to the channel state, while the receiver adaptively selects the decoding algorithm according to a preset threshold on the channel condition number; the scheme was simulated in MATLAB under the COST207 typical urban channel model. Simulation results show that the proposed algorithm effectively improves system performance.

12.
Linear programming (LP) decoding of low-density parity-check codes over discrete memoryless symmetric channels was introduced by Feldman et al. in [1]. Here, we extend the LP decoding paradigm by applying it to two additional scenarios: joint source-channel (JSC) coding, and decoding over the infinite-memory non-ergodic binary Polya-contagion channel. Simulation results indicate that the JSC LP decoder yields significant gains over the standard LP decoder for non-uniform sources. Simulations also show that the LP decoder for the Polya channel performs moderately well in relation to the ϵ-capacity limit.

13.
We consider coded data transmission over a binary-input output-symmetric memoryless channel using a binary linear code. In order to understand the performance of maximum-likelihood (ML) decoding, one studies the codewords, in particular the minimal codewords, and their Hamming weights. In the context of linear programming (LP) decoding, one's attention needs to be shifted to the pseudo-codewords, in particular, to the minimal pseudo-codewords and their pseudo-weights. In this paper, we investigate some families of codes that have good properties under LP decoding, namely certain families of low-density parity-check (LDPC) codes that are derived from projective and Euclidean planes: we study the structure of their minimal pseudo-codewords and give lower bounds on their pseudo-weight. Besides this main focus, we also present some results that hold for pseudo-codewords and minimal pseudo-codewords of any Tanner graph, and we highlight how the importance of minimal pseudo-codewords under LP decoding varies depending on which binary-input output-symmetric memoryless channel is used.

14.
Linear programming (LP) decoding is an alternative to iterative algorithms for decoding low-density parity-check (LDPC) codes. Although the practical performance of LP decoding is comparable to message-passing decoding, a significant advantage is its relative amenability to nonasymptotic analysis. Moreover, there turn out to be a number of important theoretical connections between LP decoding and standard forms of iterative decoding. These connections allow theoretical insight from the LP decoding perspective to be transferred to iterative decoding algorithms. These advantages have encouraged many researchers to work on this recent decoding technique for LDPC codes. In this paper, LP decoding of LDPC codes is extensively reviewed and discussed area by area.
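For reference, the LP relaxation of Feldman, Karger, and Wainwright that such reviews cover can be stated as follows (standard notation, a summary rather than a result of this survey: f are the relaxed bit values, γ the channel log-likelihood ratios, N(j) the neighborhood of check j):

```latex
\min_{f \in [0,1]^n} \sum_{i=1}^{n} \gamma_i f_i ,
\qquad
\gamma_i = \log \frac{\Pr(y_i \mid x_i = 0)}{\Pr(y_i \mid x_i = 1)} ,
\quad \text{s.t.} \quad
\sum_{i \in S} f_i - \sum_{i \in N(j) \setminus S} f_i \le |S| - 1
\quad \forall j,\ \forall S \subseteq N(j),\ |S| \text{ odd.}
```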

15.
MPEG-4 is the first visual coding standard that allows coding of scenes as a collection of individual audio-visual objects. We present mathematical formulations for modeling object-based scalability and some functionalities that it brings with it. Our goal is to study algorithms that aid in semi-automating the authoring and subsequent selective addition/dropping of objects from a scene to provide content scalability. We start with a simplistic model for object-based scalability using the "knapsack problem", a problem for which the optimal object set can be found using known schemes such as dynamic programming, the branch and bound method, and approximation algorithms. The above formulation is then generalized to model authoring or multiplexing of scalable objects (e.g., objects encoded at various target bit-rates) using the "multiple choice knapsack problem." We relate this model to several problems that arise in video coding, the most prominent of these being the bit allocation problem. Unlike previous approaches to solve the operational bit allocation problem using Lagrangean relaxation, we discuss an algorithm that solves the linear programming (LP) relaxation of this problem. We show that for this problem the duality gap for Lagrange and LP relaxations is exactly the same. The LP relaxation is solved using strong duality with dual descent, a procedure that can be completed in "linear" time. We show that there can be at most two fractional variables in the optimal primal solution and therefore this relaxation can be justified for many practical applications. This work reduces problem complexity, guarantees similar performance, is slightly more generic, and provides an alternate LP-duality based proof for earlier work by Shoham and Gersho (1988). In addition, we show how additional constraints may be added to impose inter-dependencies among objects in a presentation and discuss how object aggregation can be exploited in reducing problem complexity. The marginal analysis approach of Fox (1966) is suggested as a method of re-allocation with incremental inputs. It helps in efficiently re-optimizing the allocation when a system has user interactivity, appearing or disappearing objects, time driven events, etc. Finally, we suggest approximation algorithms for the multiple choice knapsack problem, which can be used to quantify the complexity vs. quality tradeoff at the encoder in a tunable and universal way.
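In generic notation (the symbols here are illustrative, not the paper's), the multiple choice knapsack model reads:

```latex
\max_{x} \sum_{i=1}^{m} \sum_{j \in G_i} u_{ij}\, x_{ij}
\quad \text{s.t.} \quad
\sum_{i=1}^{m} \sum_{j \in G_i} r_{ij}\, x_{ij} \le R ,
\qquad
\sum_{j \in G_i} x_{ij} = 1 \ \ \forall i ,
\qquad
x_{ij} \in \{0, 1\} ,
```

where group G_i holds the candidate encodings of object i (e.g. its target bit-rates), u the utility, r the rate cost, and R the bit budget. Relaxing x_ij ∈ {0,1} to x_ij ≥ 0 yields the LP relaxation whose optimal solution, as the abstract notes, contains at most two fractional variables.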

16.
The alternating direction method of multipliers (ADMM) algorithm has recently been proposed for LDPC decoding based on linear programming (LP) techniques. Even though it improves the error rate performance compared with the usual message passing (MP) techniques, it has a higher computational complexity. However, a significant step towards LP LDPC decoding scalability and optimization is made possible, since the ADMM algorithm operates like an MP decoder. In this paper, an overview of the ADMM approach and its error correction performance is provided. Then, its computation and memory complexities are evaluated. Finally, optimized software implementations of the decoder that take advantage of multi/many-core device features are described. Optimization choices are discussed and justified according to execution profiling figures and the algorithm's parallelism levels. Experimentation results show that this LP-based decoding technique can reach the real-time throughput requirements of the WiMAX and WRAN standards on mid-range devices.
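A structural sketch of ADMM-based LP decoding, assuming a parity-polytope projection routine `project_pp` is available (the projection, penalty choice, and stopping rule are where real implementations differ):

```python
import numpy as np

def admm_lp_decode(gamma, checks, project_pp, rho=1.0, iters=200):
    """Skeleton of ADMM-based LP decoding (structure only, not tuned).

    gamma      : (n,) channel LLRs
    checks     : list of index arrays, one per parity check
    project_pp : projection onto the parity polytope of a check's degree
                 (the key subroutine; assumed given here)
    """
    n = len(gamma)
    z = [np.full(len(c), 0.5) for c in checks]      # replica variables
    u = [np.zeros(len(c)) for c in checks]          # scaled dual variables
    deg = np.zeros(n)
    for c in checks:
        deg[c] += 1
    x = np.full(n, 0.5)
    for _ in range(iters):
        # x-update: average the check replicas, subtract the LLR term, clip
        t = -gamma / rho
        for c, zj, uj in zip(checks, z, u):
            t[c] += zj - uj
        x = np.clip(t / np.maximum(deg, 1), 0.0, 1.0)
        # z-update: project each check's copy of x onto its parity polytope
        for k, c in enumerate(checks):
            z[k] = project_pp(x[c] + u[k])
            u[k] += x[c] - z[k]                     # dual update
    return (x > 0.5).astype(int)
```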

17.
封宏俊  雷菁  李二保 《信号处理》2017,33(5):766-773
Systematic polar codes achieve better error performance than non-systematic polar codes, but no explicit systematic decoding algorithm is currently available, so systematic polar codes are usually decoded by concatenating non-systematic decoding with re-encoding, which introduces a large decoding delay. To address this problem, this paper proposes a systematic decoding scheme based on a list successive cancellation algorithm with flip-sequence checking. The scheme is path-adaptive: a backtracking update process eliminates the re-encoding stage, and an alternating update-and-check strategy greatly reduces resource usage. Results show that, compared with an AD-SCL-based concatenated decoding scheme, the improved scheme cuts resource usage and decoding delay by 50% while slightly improving error performance.

18.
In this paper, we demonstrate that we can effectively use the results from the field of adaptive self-organizing data structures in enhancing compression schemes. Unlike adaptive lists, which have already been used in compression, to the best of our knowledge, adaptive self-organizing trees have not been used in this regard. To achieve this, we introduce a new data structure, the partitioning binary search tree (PBST) which, although based on the well-known binary search tree (BST), also appropriately partitions the data elements into mutually exclusive sets. When used in conjunction with Fano encoding, the PBST leads to the so-called Fano binary search tree (FBST), which, indeed, incorporates the required Fano coding (nearly equal probability) property into the BST. We demonstrate how both the PBST and the FBST can be maintained adaptively and in a self-organizing manner. The updating procedure that converts a PBST into an FBST, and the corresponding new tree-based operators, namely the shift-to-left and the shift-to-right operators, are explicitly presented. The encoding and decoding procedures that also update the FBST have been implemented and rigorously tested. Our empirical results on the files of the well-known benchmarks, the Calgary and Canterbury Corpora, show that the adaptive Fano coding using FBSTs, the Huffman, and the greedy adaptive Fano coding achieve similar compression ratios. However, in terms of encoding/decoding speed, the new scheme is much faster than the latter two in the encoding phase, and they achieve approximately the same speed in the decoding phase. We believe that the same philosophy, namely that of using an adaptive self-organizing BST to maintain the frequencies, can also be utilized for other data encoding mechanisms, just as the Fenwick scheme has been used in arithmetic coding.
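For orientation, the static (non-adaptive) Fano construction that the FBST maintains incrementally splits the symbol set into nearly equal-probability halves; a minimal sketch:

```python
def fano_code(symbols):
    """Recursive Fano split: partition into nearly equal-probability halves.

    symbols: list of (symbol, frequency) pairs. This is only the static
    construction; the paper maintains the equal-probability partition
    adaptively inside a self-organizing binary search tree (FBST).
    """
    codes = {}

    def split(items, prefix):
        if len(items) == 1:
            codes[items[0][0]] = prefix or "0"
            return
        total, acc, cut = sum(f for _, f in items), 0, 1
        for k, (_, f) in enumerate(items):
            acc += f
            if acc * 2 >= total:            # nearly equal-probability split
                cut = max(1, min(k + 1, len(items) - 1))
                break
        split(items[:cut], prefix + "0")
        split(items[cut:], prefix + "1")

    split(sorted(symbols, key=lambda s: -s[1]), "")
    return codes

# fano_code([("a", 9), ("b", 5), ("c", 3), ("d", 1)])
# -> {"a": "0", "b": "10", "c": "110", "d": "111"}
```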

19.
李越  张立军  李明齐  朱秋煜 《电视技术》2016,40(12):120-124
To address the high decoding complexity of RaptorQ codes, a mode selection decoding (MSD) algorithm is proposed. The method combines the strengths of the optimized inactivation decoding with Gaussian elimination (OIDGE) algorithm and the fast dimension-reduction decoding (DRFD) algorithm: it jointly considers the actual packet loss on the channel and the efficiency of the different decoding algorithms, and selects the appropriate decoder according to the computed packet loss rate. Experiments on an embedded system show that the algorithm adaptively selects a suitable decoding algorithm under different packet loss rates, improving the decoding efficiency of RaptorQ codes.
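A minimal sketch of the selection rule; the 15% switch point and the direction of the choice are assumptions, only the pattern of picking a decoder from the measured loss rate comes from the abstract:

```python
def select_raptorq_decoder(received, expected, switch_point=0.15):
    """Pick a decoding mode from the measured packet loss rate (illustrative).

    received/expected : counts of received vs. expected encoded packets.
    Which algorithm wins on which side of the switch point is an assumption.
    """
    loss_rate = 1.0 - received / expected
    return "DRFD" if loss_rate < switch_point else "OIDGE"
```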

20.
In this work, we propose a novel entropy coding mode decision algorithm to balance the tradeoff between rate-distortion (R-D) performance and entropy decoding complexity for the H.264/AVC video coding standard. Context-based adaptive binary arithmetic coding (CABAC), context-based adaptive variable length coding (CAVLC), and universal variable length coding (UVLC) are the three entropy coding tools adopted by H.264/AVC. CABAC can be used to encode both the texture and the header data, while CAVLC and UVLC are employed to encode the texture and the header data, respectively. Although CABAC provides better R-D performance than CAVLC/UVLC, its decoding complexity is higher. Thus, once entropy decoding complexity is taken into account, CABAC may not be the best tool, which motivates us to examine the entropy coding mode decision problem in depth. It is shown experimentally that the proposed mode decision algorithm helps the encoder generate bit streams that can be decoded at much lower complexity with little R-D performance loss.
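A minimal sketch of a complexity-aware Lagrangian mode decision of the kind described; the cost form and the weights are assumptions, not the paper's exact criterion:

```python
def entropy_mode_decision(rate, distortion, dec_complexity, lam, mu):
    """Complexity-aware Lagrangian mode decision (illustrative cost only).

    rate/distortion/dec_complexity map each mode name ('CABAC', 'CAVLC')
    to its measured R, D, and decoding-complexity estimates; lam and mu
    weight rate and complexity against distortion.
    """
    def cost(mode):
        return distortion[mode] + lam * rate[mode] + mu * dec_complexity[mode]
    return min(rate, key=cost)   # iterate over the mode names

# entropy_mode_decision({"CABAC": 100, "CAVLC": 110},
#                       {"CABAC": 2.0, "CAVLC": 2.1},
#                       {"CABAC": 9.0, "CAVLC": 4.0},
#                       lam=0.1, mu=1.0)  -> "CAVLC"
```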
