Similar Documents
19 similar documents found (search time: 343 ms)
1.
A Low-Complexity Iterative Detection Algorithm for MIMO*   Total citations: 2 (self-citations: 0, by others: 2)
To reduce the computational complexity of joint detection in MIMO systems while preserving performance, soft information is introduced into the search of the sphere decoding algorithm, forming an improved FP-MAP algorithm. When computing the maximum-likelihood solution for the information symbols, the algorithm exploits their soft a priori information, and uses it during iterations to obtain soft extrinsic information. Analysis and simulation results show that, under identical conditions, FP-MAP performs on par with the LSD-MAP algorithm at somewhat lower complexity, and offers a clear performance gain over the widely used SIC-MMSE algorithm.

2.
To address the problem that receiver complexity in space-time block coded (STBC) massive MIMO systems grows exponentially with the number of antennas, a low-complexity maximum-likelihood (ML) receiver based on the EM algorithm is proposed. Exploiting the orthogonality of STBC and OFDM modulation avoids matrix inversion, and alternating between the E-step and the M-step greatly reduces receiver complexity. An initialization method for the EM algorithm based on channel characteristics is also proposed, reducing the number of iterations and thereby further lowering complexity and improving performance. Compared with the earlier non-iterative decision-feedback receiver, the iterative receiver performs significantly better; in typical fast-fading wireless channels its performance approaches that of the ML receiver at low computational complexity, making it suitable for real-time implementation.

3.
申东, 赵丹, 李强, 邸敬. 《计算机应用研究》(Application Research of Computers), 2021, 38(5): 1524-1528
For scenarios with high-dimensional channel matrices and complicated received signals, a hybrid iterative algorithm for uplink signal detection in massive MIMO systems is proposed, combining the adaptive damped Jacobi (DJ) algorithm with the conjugate gradient (CG) algorithm. First, the CG algorithm supplies an effective search direction for the adaptive damped Jacobi iteration; then a Chebyshev method is introduced to eliminate the influence of the relaxation parameter on signal detection, reducing algorithm complexity while accelerating convergence; finally, bit log-likelihood ratios from channel coding/decoding are used to approximate the soft information and improve detection performance. The complexity of the algorithm is analyzed theoretically, bit error rates of different detection algorithms are compared by simulation under different decision schemes, and the convergence of the hybrid iteration is analyzed. Simulation results show that the hybrid iterative algorithm converges quickly within a small number of iterations, approximately reaching optimal MMSE detection performance at complexity far below that of the MMSE algorithm.
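As a concrete illustration of the kind of iteration involved, here is a minimal damped-Jacobi sketch for the MMSE system (H^H H + σ²I)x = H^H y; the paper's adaptive damping, CG-supplied search direction, and Chebyshev acceleration are omitted, and the damping factor below is an illustrative assumption:

```python
import numpy as np

def damped_jacobi_mmse(H, y, sigma2, omega=0.7, iters=8):
    """Damped Jacobi for the MMSE system (H^H H + sigma2*I) x = H^H y:
        x <- x + omega * D^{-1} (b - A x)
    omega is an illustrative damping factor; the paper adapts it and
    adds a CG search direction and a Chebyshev step (omitted here)."""
    A = H.conj().T @ H + sigma2 * np.eye(H.shape[1])
    b = H.conj().T @ y
    d = np.diag(A)            # diagonal of A; inverting D is just 1/d
    x = np.zeros_like(b)
    for _ in range(iters):
        x = x + omega * (b - A @ x) / d
    return x
```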

4.
Zero-forcing (ZF) linear precoding achieves near-optimal system capacity. Unlike conventional MIMO, massive MIMO deploys hundreds of antennas, and as the antenna count grows, inverting the ZF precoding matrix becomes computationally expensive and hard to realize in practice. To reduce the complexity of linear precoding, a low-complexity Jacobi iterative algorithm is proposed, which avoids matrix inversion through linear iterations and reduces the amount of computation. To shorten computation time further, a heterogeneous multi-core parallel algorithm based on the Compute Unified Device Architecture (CUDA) is proposed, exploiting the many-core, multi-threaded structure of GPUs to realize heterogeneous multi-core parallel computing. Simulation results show that the low-complexity Jacobi precoding algorithm matches the performance of ZF precoding while requiring less computation and less time than conventional linear precoding.
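A minimal sketch of the inversion-free Jacobi splitting the abstract describes, applied to the Gram-matrix system that ZF precoding must solve; function names and the iteration count are illustrative, and the GPU parallelization is not shown:

```python
import numpy as np

def jacobi_solve(A, b, iters=20):
    """Jacobi iteration for A x = b:
        x^{(k+1)} = D^{-1} (b - (A - D) x^{(k)})
    Only the diagonal is 'inverted', so no matrix inversion is needed."""
    d = np.diag(A)
    R = A - np.diag(d)
    x = b / d                  # D^{-1} b as the starting point
    for _ in range(iters):
        x = (b - R @ x) / d
    return x

def zf_precode(H, s, iters=20):
    """ZF precoding w = H^H (H H^H)^{-1} s with the inverse replaced by
    a Jacobi solve; H H^H is diagonally dominant in massive MIMO."""
    A = H @ H.conj().T
    return H.conj().T @ jacobi_solve(A, s, iters)
```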

5.
To improve the performance of baseband algorithms in DMR systems, the BPTC code and the variable-length BPTC code of the DMR communication protocol are analyzed. For these two codes, a new iterative decoding scheme is proposed that uses the sum-product algorithm as a soft-input soft-output decoder. The scheme was simulated in MATLAB and compared with syndrome decoding. Simulation results for the BPTC and variable-length BPTC codes show that the soft-decision iterative decoding scheme achieves better coding gain at both low and high SNR.

6.
When massive multiple-input multiple-output (Massive MIMO) systems use minimum mean square error (MMSE) detection, the required matrix inversion has high complexity, and much research has gone into reducing it. Beyond lowering detection complexity, how to improve convergence speed and detection performance has remained a focus of attention. This paper applies the symmetric accelerated over-relaxation (SAOR) iterative algorithm to signal detection in Massive MIMO systems, avoiding the costly matrix inversion and reducing complexity by an order of magnitude relative to the MMSE algorithm. Simulation results show that SAOR-based detection approaches MMSE detection performance within a small number of iterations, providing a practical method for fast detection of received signals in Massive MIMO systems.
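For reference, a sketch of one SAOR step, a forward accelerated over-relaxation sweep followed by a backward one, applied to the MMSE system; the relaxation and acceleration parameters are illustrative assumptions, not the paper's tuned values:

```python
import numpy as np
from scipy.linalg import solve_triangular

def saor_detect(H, y, sigma2, omega=1.2, r=1.1, iters=4):
    """One SAOR iteration = forward AOR sweep + backward AOR sweep on
    (H^H H + sigma2*I) x = H^H y, with A split as A = D - L - U.
    omega (relaxation) and r (acceleration) are illustrative values."""
    A = H.conj().T @ H + sigma2 * np.eye(H.shape[1])
    b = H.conj().T @ y
    D = np.diag(np.diag(A))
    L = -np.tril(A, -1)
    U = -np.triu(A, 1)
    x = np.zeros_like(b)
    for _ in range(iters):
        # forward sweep: (D - r L) x = [(1-w)D + (w-r)L + w U] x + w b
        rhs = ((1 - omega) * D + (omega - r) * L + omega * U) @ x + omega * b
        x = solve_triangular(D - r * L, rhs, lower=True)
        # backward sweep: the roles of L and U are exchanged
        rhs = ((1 - omega) * D + (omega - r) * U + omega * L) @ x + omega * b
        x = solve_triangular(D - r * U, rhs, lower=False)
    return x
```

Each sweep costs only a triangular solve, which is consistent with the order-of-magnitude complexity reduction the abstract claims over explicit inversion.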

7.
To address the high complexity of inverting the zero-forcing (ZF) precoding matrix in massive MIMO systems, where the numbers of base-station antennas and users are large, a low-complexity precoding scheme is proposed based on the Lanczos method, an iterative subspace-projection algorithm. Since the channel matrix of a massive MIMO system is diagonally dominant, the first term of the Neumann series for the inverse of the large channel matrix is used as the initial iterate, accelerating convergence and reducing the complexity of ZF precoding from O(K³) to O(K²). Simulation results show that the algorithm converges quickly to the channel capacity and bit-error-rate performance of the conventional ZF precoding scheme.
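A sketch of the warm-start idea: the first Neumann-series term D⁻¹b serves as the initial iterate of a Krylov solver. Conjugate gradients, the Lanczos-type method for Hermitian positive definite systems, stands in here for the paper's full Lanczos machinery:

```python
import numpy as np

def cg_neumann_start(A, b, iters=4):
    """Conjugate gradients warm-started with x0 = D^{-1} b, the first
    Neumann-series term for A^{-1} -- a good start when A is diagonally
    dominant, as massive-MIMO Gram matrices are."""
    x = b / np.diag(A)          # Neumann first term as initial iterate
    r = b - A @ x
    p = r.copy()
    rs = np.vdot(r, r)
    for _ in range(iters):
        Ap = A @ p
        alpha = rs / np.vdot(p, Ap)
        x = x + alpha * p
        r = r - alpha * Ap
        rs_new = np.vdot(r, r)
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x
```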

8.
A new soft-input soft-output detection algorithm is proposed that performs interference cancellation and linear minimum mean square error filtering on the individual components of modulation symbols, embedded in an iterative detection and decoding (IDD) structure that serially concatenates a soft-input soft-output MIMO detector with the channel code. By fully exploiting the independent fading of the in-phase and quadrature components of quadrature-modulated symbols, the algorithm achieves more accurate soft interference cancellation during detection. Extrinsic information transfer (EXIT) charts show that it has a lower critical SNR than the conventional symbol-by-symbol soft cancellation algorithm. Numerical simulations also verify that the proposed component-level cancellation linear detection achieves better error performance than symbol-level cancellation while retaining low complexity.
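A simplified sketch of component-wise soft interference cancellation followed by MMSE filtering, written for QPSK where each I/Q component carries one Gray-mapped bit; the shapes, normalization, and filter form are illustrative assumptions rather than the paper's exact construction:

```python
import numpy as np

def soft_ic_mmse(H, y, llr, sigma2):
    """Component-wise soft interference cancellation + MMSE filtering,
    sketched for QPSK (one Gray-mapped bit per I/Q component).
    llr has shape (2, K): LLRs of the I and Q bits of the K streams.
    Estimates are left unnormalized."""
    N, K = H.shape
    # per-component soft symbol estimates and residual variances
    x_soft = (np.tanh(llr[0] / 2) + 1j * np.tanh(llr[1] / 2)) / np.sqrt(2)
    var = 1.0 - np.abs(x_soft) ** 2
    x_hat = np.zeros(K, dtype=complex)
    for k in range(K):
        mask = np.arange(K) != k
        y_k = y - H[:, mask] @ x_soft[mask]        # cancel the other streams
        # covariance of residual interference + noise, plus stream k itself
        R = (H[:, mask] * var[mask]) @ H[:, mask].conj().T \
            + np.outer(H[:, k], H[:, k].conj()) + sigma2 * np.eye(N)
        w = np.linalg.solve(R, H[:, k])            # MMSE filter for stream k
        x_hat[k] = w.conj() @ y_k
    return x_hat
```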

9.
Combining the V-BLAST architecture with CDMA, a multiuser CDMA system model is constructed, and a multiuser detection algorithm is proposed that couples multiuser detection with Turbo-style iteration. The scheme first applies a decorrelating algorithm to the received signal for space-time multiuser detection, removing inter-user interference, and then performs iterative soft interference cancellation detection on the desired user's received signal to remove inter-antenna interference. After decorrelation, data detection depends only on the antenna count of a single user, so the algorithm has low complexity. Simulation results show that the constructed space-time multiuser model makes the detection of each user essentially independent, which facilitates the reuse of existing single-user Turbo-BLAST schemes.

10.
As the number of antennas grows in massive MIMO systems, the downlink training and uplink feedback overhead required to obtain channel state information at the transmitter (CSIT) becomes enormous. To address this excessive channel estimation overhead, a new CSIT estimation scheme and a channel estimation algorithm based on low-rank matrix completion are proposed. In this scheme, after the base station transmits training signals to the users, each user feeds its observations directly back to the base station, where CSIT for all users is estimated jointly. Exploiting the structure of massive MIMO channel matrices, the channel estimation problem is then cast as a low-rank matrix completion problem, and a soft-thresholding algorithm recovers the channel state information of all users. Simulation results show that the algorithm obtains accurate channel state information while effectively reducing channel estimation overhead and complexity.
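The recovery step can be illustrated with a minimal singular-value soft-thresholding loop for matrix completion; the threshold, step size, and iteration count below are illustrative, not the paper's settings:

```python
import numpy as np

def svt_complete(M, observed, tau=5.0, step=1.0, iters=100):
    """Singular-value soft-thresholding for low-rank matrix completion:
    recover a low-rank X agreeing with M on the observed entries.
    observed is a boolean mask of the same shape as M."""
    Y = np.zeros_like(M)
    for _ in range(iters):
        U, s, Vh = np.linalg.svd(Y, full_matrices=False)
        s = np.maximum(s - tau, 0.0)           # soft-threshold singular values
        X = (U * s) @ Vh
        Y = Y + step * observed * (M - X)      # correct only the known entries
    return X
```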

11.
Optimization and Efficient Parallelization of Dynamic Programming for RNA Secondary Structure Prediction   Total citations: 6 (self-citations: 0, by others: 6)
谭光明, 冯圣中, 孙凝晖. 《软件学报》(Journal of Software), 2006, 17(7): 1501-1509
Methods based on the minimum free energy model are the principal approach to RNA secondary structure prediction in computational biology, but the dynamic programming algorithm for computing the minimum free energy takes O(n⁴) time, where n is the length of the RNA sequence. Two strategies exist for reducing this time complexity: restricting the size of internal loops in the secondary structure to at most k, which gives an O(n²k²) algorithm; or Lyngsø's method, which uses the energy rules of loops to obtain a near-optimal solution in O(n³) time without restricting loop size. By using an additional O(n) space, the redundant computation over internal loops is greatly reduced, so that the optimal solution can be obtained in O(n³) time, likewise without restricting loop size. The optimized algorithm remains very time-consuming, however, so a parallel program was implemented on a cluster system using an effective load-balancing method. Experimental results show that the parallel program achieves good speedup.
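The recurrence structure being optimized can be illustrated with the classic O(n³) Nussinov dynamic program, which maximizes base pairs rather than minimizing free energy; it is a simplified stand-in, not the thermodynamic model the paper parallelizes:

```python
def nussinov(seq):
    """Classic O(n^3) Nussinov DP: dp[i][j] = max base pairs in seq[i..j].
    A simplified stand-in for the free-energy recurrences discussed above.
    Example: nussinov("GGGAAAUCC") == 3."""
    pairs = {("A", "U"), ("U", "A"), ("G", "C"),
             ("C", "G"), ("G", "U"), ("U", "G")}
    n = len(seq)
    dp = [[0] * n for _ in range(n)]
    for span in range(1, n):
        for i in range(n - span):
            j = i + span
            best = dp[i + 1][j]                         # i left unpaired
            if (seq[i], seq[j]) in pairs:
                best = max(best, dp[i + 1][j - 1] + 1)  # i pairs with j
            for k in range(i + 1, j):                   # bifurcation
                best = max(best, dp[i][k] + dp[k + 1][j])
            dp[i][j] = best
    return dp[0][n - 1]
```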

12.
The paper deals with large transportation problems in which the number of destinations (n) is much larger than the number of origins (m), and in which some or many of the paths from origins to destinations may be inadmissible. Using a new approach with certain auxiliary lists, it is proved that the ordinary simplex algorithm ("Most Negative Rule") can be performed in O(m² + m log n) computer operations per iteration, as against O(mn) in the usual approach. A search-in-a-row simplex algorithm ("Row Most Negative Rule"), for which the total number of iterations is probably only somewhat larger, is shown to require just O(m + σm log n) operations per iteration, where σ is the density of the cost matrix (i.e., the proportion of admissible paths). For these rigorous results one needs computer storage not considerably larger than that required for storing the cost matrix. For smaller memory an efficient algorithm is also proposed. A general tentative rule for the amount of scanning per iteration is introduced and applied. Computer experiments are reported, confirming the theoretical estimates.

13.
The measure of similarity between objects is a very useful tool in many areas of computer science, including information retrieval. SimRank is a simple and intuitive measure of this kind, based on a graph-theoretic model. SimRank is typically computed iteratively, in the spirit of PageRank. However, existing work on SimRank lacks accuracy estimation for the iterative computation and has discouraging time complexity. In this paper, we present a technique to estimate the accuracy of computing SimRank iteratively. This technique provides a way to determine the number of iterations required to achieve a desired accuracy when computing SimRank. We also present optimization techniques that improve the computational complexity of the iterative algorithm from O(n⁴) in the worst case to min(O(nl), O(n³/log₂n)), with n denoting the number of objects and l denoting the number of object-to-object relationships. We also introduce a threshold sieving heuristic and its accuracy estimation, which further improves the efficiency of the method. As a practical illustration of our techniques, we computed SimRank scores on a subset of the English Wikipedia corpus, consisting of the complete set of articles and category links.
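A direct transcription of the naive iterative SimRank computation, the O(n⁴)-per-pass baseline that such optimizations improve on; the graph is assumed to be given as in-neighbor adjacency lists:

```python
import numpy as np

def simrank(in_neighbors, C=0.8, iters=10):
    """Naive iterative SimRank. in_neighbors[v] lists the in-neighbors
    of node v; s(a,a) = 1 and
    s(a,b) = C / (|I(a)||I(b)|) * sum_{i in I(a), j in I(b)} s(i,j)."""
    n = len(in_neighbors)
    S = np.eye(n)
    for _ in range(iters):
        T = np.eye(n)
        for a in range(n):
            for b in range(a):
                Ia, Ib = in_neighbors[a], in_neighbors[b]
                if Ia and Ib:           # nodes without in-neighbors score 0
                    total = sum(S[i, j] for i in Ia for j in Ib)
                    T[a, b] = T[b, a] = C * total / (len(Ia) * len(Ib))
        S = T
    return S
```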

14.
International Journal of Computer Mathematics, 2012, 89(3-4): 205-226
Ghosh and Bhattacharjee [2] (Intern. J. Computer Math., 1984, Vol. 15, pp. 255-268) propose an algorithm for determining breadth-first spanning trees of graphs which requires that the input graph contain some vertices from which every other vertex can be reached; these are called starting vertices. The complexity of the GB algorithm is O(log²n) using O(n³) processors. In this paper an algorithm named BREADTH, which also computes breadth-first spanning trees, is proposed; its complexity is O(log²n) using O(n³/log n) processors. An efficient parallel algorithm named BREADTHFOREST is then proposed, which generalizes BREADTH: applied to a general graph, which may not contain any starting vertices, it outputs a breadth-first spanning forest of the input graph. The complexity of BREADTHFOREST is the same as that of BREADTH.

15.
We propose a non-iterative solution to the PnP problem—the estimation of the pose of a calibrated camera from n 3D-to-2D point correspondences—whose computational complexity grows linearly with n. This is in contrast to state-of-the-art methods that are O(n⁵) or even O(n⁸), without being more accurate. Our method is applicable for all n ≥ 4 and properly handles both planar and non-planar configurations. Our central idea is to express the n 3D points as a weighted sum of four virtual control points. The problem then reduces to estimating the coordinates of these control points in the camera referential, which can be done in O(n) time by expressing these coordinates as a weighted sum of the eigenvectors of a 12×12 matrix and solving a small constant number of quadratic equations to pick the right weights. Furthermore, if maximal precision is required, the output of the closed-form solution can be used to initialize a Gauss-Newton scheme, which improves accuracy with a negligible amount of additional time. The advantages of our method are demonstrated by thorough testing on both synthetic and real data.
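The central parameterization can be sketched as follows: pick four virtual control points (the centroid plus scaled principal axes) and solve a 4×4 system for each point's barycentric weights. This assumes a non-planar point cloud; the planar case and the subsequent pose recovery are not shown:

```python
import numpy as np

def control_points_and_weights(pts):
    """EPnP-style parameterization: express n 3D points (shape (n, 3)) as
    weighted sums of 4 virtual control points. Returns C (4, 3) and
    alpha (n, 4) with pts[i] == alpha[i] @ C and alpha[i].sum() == 1.
    Assumes a non-planar cloud (otherwise M below is singular)."""
    centroid = pts.mean(axis=0)
    centered = pts - centroid
    _, s, Vt = np.linalg.svd(centered, full_matrices=False)
    scale = s / np.sqrt(len(pts))                # spread along principal axes
    C = np.vstack([centroid, centroid + scale[:, None] * Vt])
    # barycentric weights: solve [C^T; 1^T] alpha = [p; 1] for each point
    M = np.vstack([C.T, np.ones(4)])             # (4, 4)
    rhs = np.vstack([pts.T, np.ones(len(pts))])  # (4, n)
    alpha = np.linalg.solve(M, rhs).T            # (n, 4)
    return C, alpha
```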

16.
Parallel modifications of linear iteration schemes are proposed that are used to solve systems of linear algebraic equations and achieve a time complexity of T = log₂k · O(log₂n), where k is the number of iterations of the original scheme and n is the dimension of the system. Such schemes are extended to the approximate solution of systems of linear differential equations with constant coefficients. Based on them, and using a program, the stability of solutions in the Lyapunov sense is analyzed.

17.
We study the complexity of, and algorithms to construct, approximations of the union of lines and of the Minkowski sum of two simple polygons. We also study thick unions of lines and Minkowski sums, which are inflated with a small disc. Let b = D/ε be the ratio of the diameter of the region of interest to the maximum distance (or error) of the approximation. An approximate union of lines or Minkowski sum has complexity Θ(b²) in the worst case. The thick union of n lines has complexity Ω(n + b²) and O(n + b√(bn)), and thick Minkowski sums have complexity Ω(n² + b²) and O((n+b)n√b·log n + n²log n) in the worst case. We present algorithms that run in O(n + n^(2/3+δ)b^(4/3)) and O(min(bn, n^(4/3+δ)b^(2/3))) time (for any δ > 0) for approximate and thick arrangements. For approximate Minkowski sums, the running time is O(min(b²n, n² + b² + (nb)^(4/3+δ))); thick Minkowski sums take O(n^(8/3+δ)b^(2/3)) time to compute.

18.
In this paper, we propose new adaptive algorithms for the extraction and tracking of the least (minor) or, eventually, principal eigenvectors of a positive Hermitian covariance matrix. The main advantage of our proposed algorithms is their low computational complexity and numerical stability, even in the minor component analysis case. The proposed algorithms are considered fast in the sense that their computational cost is O(np) flops per iteration, where n is the size of the observation vector and p < n is the number of eigenvectors to estimate. We consider OJA-type minor component algorithms based on constrained and unconstrained stochastic gradient techniques. Using appropriate fast orthogonalization procedures, we introduce new fast algorithms that extract the minor (or principal) eigenvectors and guarantee good numerical stability as well as the orthogonality of their weight matrix at each iteration. In order to achieve a faster convergence rate, we propose a normalized version of these algorithms by seeking the optimal step size. Our algorithms behave similarly to, or even better than, other existing algorithms of higher complexity, as illustrated by our simulation results.
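A minimal sketch of an OJA-type minor-subspace iteration: a stochastic gradient descent step followed by re-orthonormalization of the weight matrix. The paper's fast orthogonalization procedures and optimal step size are replaced here by plain QR and a fixed illustrative step size:

```python
import numpy as np

def oja_minor_subspace(X, p, eta=0.01):
    """OJA-type minor-subspace sketch. X is (T, n): T observation vectors.
    Each step descends on ||W^T x||^2 (the minus sign steers W toward the
    minor eigenvectors) and then restores orthonormal columns."""
    T, n = X.shape
    rng = np.random.default_rng(0)
    W, _ = np.linalg.qr(rng.standard_normal((n, p)))
    for x in X:
        y = W.T @ x                      # p projections of the sample
        W = W - eta * np.outer(x, y)     # gradient *descent* step
        W, _ = np.linalg.qr(W)           # re-orthonormalize the weights
    return W
```

Replacing the QR step with a cheaper orthogonalization is what brings the per-iteration cost down toward the O(np) figure quoted in the abstract.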

19.
In this paper we describe how to apply fine-grain parallelism to augmenting-path algorithms for the dense linear assignment problem. We demonstrate that the technique we suggest can be efficiently implemented on commercially available, massively parallel computers. Using n processors, our method reduces the computational complexity from the sequential O(n³) to the parallel complexity of O(n²). Exhaustive experiments are performed on a MasPar MP-2 in order to determine which of the algorithmic flavors fits best on this kind of architecture.
