Similar Documents
20 similar documents found (search time: 171 ms)
1.
IRA codes are usually decoded with the belief propagation (BP) algorithm, but the hardware circuitry for BP decoding is complex. To strike a better trade-off between complexity and decoding performance, an improved IRA decoding algorithm is proposed that uses an offset-based approximation to approach BP decoding, thereby simplifying its complexity. Simulation results show that, compared with BP decoding, the improved IRA algorithm reduces complexity while maintaining good decoding performance; compared with min-sum decoding, its complexity is almost unchanged but its decoding performance is markedly improved.
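An offset-based approximation of the BP check-node rule, of the kind described above, is commonly written as an offset min-sum update. A minimal sketch follows; the offset value `beta`, the function name, and the message layout are illustrative assumptions, not taken from the paper:

```python
def check_node_update(msgs, beta=0.15):
    """Offset min-sum approximation of the BP check-node rule.

    Each outgoing message takes the sign product and the minimum
    magnitude of the *other* incoming LLRs, reduced by an offset
    `beta` that compensates for min-sum's overestimation.
    """
    out = []
    for i in range(len(msgs)):
        others = msgs[:i] + msgs[i + 1:]
        sign = 1.0
        for m in others:
            sign = -sign if m < 0 else sign
        mag = min(abs(m) for m in others)
        out.append(sign * max(mag - beta, 0.0))
    return out
```

The `max(..., 0.0)` clamp keeps the offset from flipping a message's sign, which is the usual convention for offset min-sum.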

2.
To improve the error-correcting performance of short low-density parity-check (LDPC) codes, a new BP-BMA concatenated decoding algorithm for short LDPC codes is proposed, building on a study of the box-and-match algorithm (BMA) and the concatenated belief-propagation/ordered-statistics decoding (BP-OSD) algorithm. The new algorithm fully exploits the low decoding complexity of BMA. Computer simulations were then carried out combining it with the accumulated log-likelihood ratio (ALLR) algorithm. The results show that, compared with BP-OSD, the BP-BMA concatenated algorithm improves decoding performance while greatly reducing decoding complexity, achieving a good trade-off between the two.

3.
The Mackay-Neal algorithm is a simplified BP decoding algorithm for LDPC codes, but it still involves a large number of multiplications. To reduce the computational load, an improved log-domain sum-product decoding algorithm based on the Mackay-Neal algorithm is proposed. Complexity analysis shows that the improved log-domain sum-product algorithm is simpler, greatly reduces the amount of computation, and is easier to implement in hardware.

4.
包志祥, 吕娜, 陈柯帆. 《计算机应用》 2015, 35(6): 1541-1545
Decoding of irregular repeat-accumulate (IRA) codes usually employs the belief propagation (BP) algorithm, but BP decoding requires computing the hyperbolic tangent function, which is costly and unfavorable for hardware implementation. A decoding algorithm combining piecewise-function correction with a pre-detection mechanism is therefore proposed. Non-uniform error compensation is applied to the piecewise-linear approximate decoding algorithm, bringing its performance close to BP; at the same time, a pre-detection mechanism screens the messages passed by check nodes, identifies log-likelihood values with negligible influence on subsequent iterations, and removes them from the iteration loop, reducing computation. Simulation results show that approximating the hyperbolic tangent with a corrected piecewise function and introducing pre-detection greatly reduce computational complexity while keeping decoding performance close to BP.

5.
BP decoding of LT codes has high complexity, and short cycles in the Tanner graph tend to cause oscillation during decoding. A soft-bit-domain iterative decoding algorithm is therefore proposed. The hyperbolic tangent function is transformed and quantized to obtain a soft-bit domain on the interval (-1, 1), and the variable-node update rule is recast in this domain. To handle the oscillation of extrinsic information at some variable nodes caused by short cycles in LT codes, a new oscillation criterion is given: a node is judged to oscillate only if its sign flips in two consecutive iterations and the soft-bit magnitudes all exceed a threshold. Simulation results show that the simplified soft-bit-domain oscillation-aware iterative decoding algorithm reduces computation by about 75% relative to conventional BP while approaching BP in bit-error-rate performance.
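The oscillation criterion above can be sketched as a small predicate. The three-value history window, the threshold default, and the function name are illustrative assumptions about how the rule might be realized, not the paper's implementation:

```python
def is_oscillating(history, threshold=0.6):
    """Oscillation test for one variable node.

    The node is flagged only if its soft-bit value (in (-1, 1))
    changed sign in each of the last two iterations AND every
    magnitude involved exceeds `threshold`. `history` holds the
    most recent soft-bit values, oldest first.
    """
    if len(history) < 3:
        return False
    a, b, c = history[-3:]
    flipped_twice = a * b < 0 and b * c < 0
    strong = all(abs(v) > threshold for v in (a, b, c))
    return flipped_twice and strong
```

Requiring both the double sign flip and large magnitudes distinguishes genuine oscillation from the small sign changes of a still-undecided node.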

6.
The Chase2 algorithm is one of the most commonly used soft-decision decoding algorithms for Turbo product codes (TPC). In the conventional Chase2 algorithm, computing Euclidean distances and searching for competing codewords both require heavy computation, making hardware implementation complex. Building on the conventional algorithm, a correlation metric is therefore used as an equivalent substitute for the Euclidean distance metric and the search for competing codewords is simplified, reducing decoding complexity; the soft-output value used when no competing codeword exists is adjusted to increase coding gain. Simulation results show that the improved algorithm decodes faster and performs better than the conventional Chase algorithm and is well suited to hardware implementation.

7.
When the classical LLR BP algorithm is used to decode low-density parity-check (LDPC) codes, the large number of iterations and the high per-iteration complexity of the check-node computation make overall decoding complexity very high. An improved LLR BP decoding algorithm is proposed in which the costly Jacobi correction term of LLR BP is approximated piecewise-linearly via a Taylor series expansion. Simulations show that the algorithm substantially reduces LDPC decoding complexity with only a small loss in decoding performance.
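For illustration, the costly term being approximated is the Jacobi correction ln(1 + e^(-x)) from the max-star operation. A piecewise-linear stand-in might look like the sketch below; the breakpoints and slopes are illustrative assumptions, not the paper's actual fit:

```python
import math

def correction_exact(x):
    """Jacobi correction term ln(1 + e^(-x)) of LLR BP, for x >= 0."""
    return math.log1p(math.exp(-x))

def correction_piecewise(x):
    """Piecewise-linear stand-in for the correction term.

    Two segments plus a zero tail, kept continuous at the breakpoint
    x = 0.5 (0.7 - 0.25 = 0.575 - 0.125 = 0.45).
    """
    if x < 0.5:
        return 0.7 - 0.5 * x
    return max(0.575 - 0.25 * x, 0.0)
```

With these (illustrative) segments the absolute error stays below about 0.1 over the useful range, while each evaluation needs only one multiply and one add instead of `exp` and `log`.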

8.
Research on LDPC simulation algorithms based on channel estimation
Studying LDPC decoding algorithms over channels with memory is important for high-speed digital communication systems. Existing iterative LDPC decoding algorithms for channels with memory, such as BP iterative decoding based on channel estimation, suffer from high complexity and heavy computation. For a hidden-Markov noise channel, the min-sum algorithm is introduced for the first time into iterative LDPC decoding based on soft noise decisions and channel estimation, exploiting properties of the min-sum function to reduce complexity and computation. Simulation results show that the algorithm outperforms both ordinary iterative LDPC decoding that ignores channel memory and BP iterative decoding based on hard noise decisions and channel estimation; with only a small performance loss, it strikes a good balance between decoding performance and algorithm complexity, which matters for real-time communication systems.

9.
Superposition-feedback decoding of parallel concatenated block codes based on correlation operations
彭万权. 《计算机仿真》 2009, 26(6): 348-351
Both parallel and serial concatenated block codes can realize Turbo iterative decoding based on LLR computation, but the former offers a higher code rate. Linearly superimposing the received information onto the soft output of the component decoders and feeding it back realizes Turbo iterative decoding of parallel concatenated block codes without the cumbersome LLR computation; decoding performance close to the LLR-based algorithm is obtained simply by applying correlation operations to the decoder output and suitably modifying the Chase2 decoding algorithm. Simulations verify the effectiveness of the algorithm.

10.
In the context of the LDPC codes of the IEEE 802.16e standard, an improved BP decoding algorithm is proposed that starts from soft-decision LLR BP decoding and combines the min-sum processing of LDPC codes with hard-decision decoding ideas, targeting both decoding performance and complexity. At the same SNR, the new BP algorithm performs very close to LLR BP while being far less complex, improving feasibility for engineering implementation.

11.
Multi-view subspace clustering handles multi-view data by learning a consensus subspace and then clustering in it. Existing multi-view subspace clustering algorithms, however, focus only on the original views and ignore the data obtained by direct feature concatenation. The proposed FSMC algorithm lets the original views and the feature-concatenation view learn from each other, obtaining a more suitable subspace representation through error reconstruction and structured subspace constraints, while also weighting the original views against the feature-concatenation view. Experiments on four benchmark datasets verify the effectiveness of the algorithm.

12.
This paper proposes a BP network learning algorithm based on improved particle swarm optimization. The algorithm first improves the traditional BP algorithm so that the numbers of input-layer, hidden-layer, and output-layer nodes reach an optimal configuration. It then replaces the gradient-descent step of traditional BP with particle swarm optimization, making the improved algorithm less prone to local minima and better at generalization, and applies it to stock prediction. The results show that the algorithm clearly reduces the number of iterations, improves convergence accuracy, and generalizes better than traditional BP.
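Replacing the gradient-descent step with particle swarm optimization can be sketched as below. The swarm hyperparameters, the tiny least-squares objective standing in for the network loss, and all names are illustrative assumptions, not the paper's implementation:

```python
import random

def pso_train(loss, dim, n_particles=30, iters=200, seed=0):
    """Minimal particle swarm optimization used in place of the
    gradient-descent step of BP (hyperparameters are illustrative)."""
    rng = random.Random(seed)
    pos = [[rng.uniform(-3.0, 3.0) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_val = [loss(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]
    w, c1, c2 = 0.7, 1.5, 1.5  # inertia weight and acceleration coefficients
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                vel[i][d] = (w * vel[i][d]
                             + c1 * rng.random() * (pbest[i][d] - pos[i][d])
                             + c2 * rng.random() * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            v = loss(pos[i])
            if v < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], v
                if v < gbest_val:
                    gbest, gbest_val = pos[i][:], v
    return gbest, gbest_val

# Toy stand-in for the network loss: fit a single linear neuron
# y = w*x + b to samples of y = 2x + 1 (hypothetical data).
data = [(x, 2.0 * x + 1.0) for x in (-2.0, -1.0, 0.0, 1.0, 2.0)]

def mse(p):
    return sum((p[0] * x + p[1] - y) ** 2 for x, y in data) / len(data)

best, val = pso_train(mse, dim=2)
```

Because PSO updates each particle using only loss values, no gradient is needed, which is what lets it escape the local minima that plain gradient-descent BP can get stuck in.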

13.
We propose a novel homogeneous neural-network ensemble approach, the Generalized Regression Neural Network Ensemble for Forecasting Time Series (GEFTS), which combines existing machine learning algorithms. GEFTS uses a dynamic nonlinear weighting system in which the outputs from several base-level GRNNs are combined by a combiner GRNN to produce the final output. We compare GEFTS with the 11 most used algorithms on 30 real datasets. The proposed algorithm appears to be more powerful than existing ones. Unlike conventional algorithms, GEFTS is effective in forecasting time series with seasonal patterns.

14.
As one of the most commonly used soft-output decoding algorithms for polar codes, belief propagation (BP) offers parallel message passing and high throughput, but suffers from slow convergence and high computational complexity. An offset min-sum approximate BP decoding algorithm based on a recurrent neural network is proposed: the offset min-sum approximation replaces the multiplications, the message-update strategy within the iterations is modified, and an improved recurrent-network architecture enables parameter sharing. Simulation results show that, compared with conventional BP decoding, the algorithm improves bit-error-rate (BER) performance while cutting additions by about 75% and converging much faster; compared with BP decoding based on a deep neural network, it replaces multiplications with additions and saves about 80% of the memory overhead without significant BER degradation.

15.
Location awareness is now becoming a vital requirement for many practical applications. In this paper, we consider passive localization of multiple targets with one transmitter and several receivers based on time of arrival (TOA) measurements. Existing studies assume that positions of receivers are perfectly known. However, in practice, receivers' positions might be inaccurate, which leads to localization error of targets. We propose factor graph (FG)-based belief propagation (BP) algorithms to locate the passive targets and improve the position accuracy of receivers simultaneously. Due to the nonlinearity of the likelihood function, messages on the FG cannot be derived in closed form. We propose both sample-based and parametric methods to solve this problem. In the sample-based BP algorithm, particle swarm optimization is employed to reduce the number of particles required to represent messages. In parametric BP algorithm, the nonlinear terms in messages are linearized, which results in closed-form Gaussian message passing on FG. The Bayesian Cramér–Rao bound (BCRB) for passive targets localization with inaccurate receivers is derived to evaluate the performance of the proposed algorithms. Simulation results show that both the sample-based and parametric BP algorithms outperform the conventional method and attain the proposed BCRB. Receivers' positions can also be improved via the proposed BP algorithms. Although the parametric BP algorithm performs slightly worse than the sample-based BP method, it could be more attractive in practical applications due to the significantly lower computational complexity.  相似文献   

16.
To meet the requirements of big data processing, this paper presents an efficient mapping scheme for a fully connected multilayer neural network trained with the back-propagation (BP) algorithm on Map-Reduce cloud computing clusters. Batch-training (epoch-training) regimes are realized by effectively segmenting the samples across the clusters, training the segments separately, and summing the weights to achieve convergence through iteration. The time required to run a parallel BP algorithm on the clusters and a serial BP algorithm on a uniprocessor is derived, and performance parameters such as speedup and the optimal and minimum numbers of data nodes are evaluated for the parallel BP algorithm. Experimental results demonstrate that the proposed parallel BP algorithm achieves better speedup, a faster convergence rate, and fewer iterations than existing algorithms.

17.
Accurately estimating flight ground-service times can greatly improve the efficiency of ground handling. Principal component analysis (PCA) is used to reduce the correlation among variables. Because the structure of a BP neural network is hard to determine and its initial weights and thresholds are random, an improved genetic algorithm is proposed to optimize the network structure and the initial weights and thresholds, yielding a flight ground-service time estimation model based on a BP neural network with an adaptive multi-layer genetic algorithm (AMGA). To validate the proposed AMGA-BP algorithm, ground-service times at a domestic hub airport are taken as the study object and comparative experiments against the conventional GA-BP and BP algorithms are carried out. The results show that AMGA-BP estimates service times more accurately than both BP and GA-BP.

18.
Error back-propagation (BP) is one of the most popular ideas used in learning algorithms for multilayer neural networks. BP algorithms come in two learning schemes, online learning and batch learning. Online BP has been applied to various practical problems because it is simple to implement; however, an efficient implementation usually requires an ad hoc rule for determining the learning rate. In this paper, we propose a new learning algorithm called SPM, derived from the successive projection method for solving a system of nonlinear inequalities. Although SPM can be regarded as a modification of online BP, it determines the learning rate (step size) adaptively based on the output for each input pattern. SPM may also be considered a modification of the globally guided back-propagation (GGBP) proposed by Tang and Koehler. Although no theoretical proof of convergence for SPM is given, simulation results on pattern classification problems indicate that SPM is more effective and robust than standard online BP and GGBP.

19.
Research on the extraction and organization of topic terms from web text
The exponential explosion of online information makes it hard for people to acquire and manage. To mine the key factors in massive information and organize them appropriately, this paper designs algorithms for extracting and organizing topic terms from web text. The extraction algorithm concatenates segmented words under multi-level noise filtering, using a dedicated noise lexicon and filtering strategy to strictly control the concatenation process; with a reasonable inclusion strategy, it extracts topic-term strings that accurately reflect the key factors in massive web data. To organize the topic terms clearly and link them to web events, a new word-clustering strategy processes the extraction results so that terms expressing the same hot topic are grouped together and jointly describe the same event. In experiments on real web text, the algorithms show satisfactory performance.

20.
This paper is concerned with covariance intersection (CI) fusion for multi-sensor linear time-varying systems with unknown cross-covariance. First, a CI fusion weighted by a diagonal matrix (DCI) is proposed; it is proved to be unbiased and robust, with higher accuracy than classical CI fusion. Second, the genetic simulated annealing (GSA) algorithm is applied to the multi-parameter optimization problem created by the diagonal-matrix weights. Because the GSA optimization process is seriously time-consuming, a Back Propagation (BP) network is used to obtain the optimal weights. Finally, a DCI based on the GSA algorithm and a BP network is proposed, which has higher accuracy and better stability than classic CI fusion algorithms. Simulation analyses verify the effectiveness and correctness of the conclusions.
