Similar Documents
20 similar documents found (search time: 218 ms)
1.
石峰  耿烜 《电讯技术》2017,57(11):1295-1300
To reduce the computational complexity of base station management in ultra-dense networks and improve base station energy efficiency, a clustering-based dynamic base station management algorithm is proposed that exploits information such as user density and network load. The algorithm first computes the theoretical minimum number of required base stations from user measurement reports, then partitions the base stations into suitable network clusters, and finally determines the base station sleep combination with a particle swarm optimization algorithm. Simulation results show that, compared with a base station management algorithm without clustering, the proposed algorithm reduces computational complexity by about 60% and effectively lowers base station energy consumption.
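As an illustration of the sleep-combination search step, here is a minimal binary particle swarm optimization sketch in Python; the cost function, penalty weight, and all parameter values are assumptions for illustration, not taken from the paper.

```python
import numpy as np

def bpso_sleep_selection(n_bs, cost_fn, n_particles=20, n_iter=100,
                         w=0.7, c1=1.5, c2=1.5, seed=0):
    """Binary PSO over sleep vectors s in {0,1}^n_bs (1 = base station active).

    cost_fn(s) should return the objective to minimize, e.g. total energy
    plus a penalty when the active set cannot carry the offered load.
    """
    rng = np.random.default_rng(seed)
    pos = rng.integers(0, 2, size=(n_particles, n_bs))        # particle positions
    vel = rng.normal(0, 1, size=(n_particles, n_bs))          # velocities
    pbest = pos.copy()
    pbest_cost = np.array([cost_fn(p) for p in pos])
    gbest = pbest[np.argmin(pbest_cost)].copy()
    gbest_cost = pbest_cost.min()

    for _ in range(n_iter):
        r1, r2 = rng.random((2, n_particles, n_bs))
        vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
        prob = 1.0 / (1.0 + np.exp(-vel))                     # sigmoid -> probability of a 1
        pos = (rng.random((n_particles, n_bs)) < prob).astype(int)
        cost = np.array([cost_fn(p) for p in pos])
        improved = cost < pbest_cost
        pbest[improved], pbest_cost[improved] = pos[improved], cost[improved]
        if cost.min() < gbest_cost:
            gbest, gbest_cost = pos[np.argmin(cost)].copy(), cost.min()
    return gbest, gbest_cost

# Toy example: 8 base stations, energy cost plus a penalty if fewer than 3 stay active.
cost = lambda s: s.sum() + 100 * max(0, 3 - s.sum())
print(bpso_sleep_selection(8, cost))
```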

2.
王珺  葛万成 《通信技术》2011,(9):48-50,53
A new spatial-domain cooperative interference suppression scheme for base stations is proposed, in which the base stations of neighboring cells are dynamically grouped into clusters according to the interference experienced by each user. The scheme is verified by Matlab simulation under different precoding algorithms and different numbers of transmit antennas, where the choice of precoding algorithm is a trade-off between system performance requirements and implementation complexity. Simulation results show that the proposed dynamic cooperative-cluster inter-cell interference suppression scheme can...

3.
雷俊  周春晖  肖立民  石明军  姚彦 《通信技术》2010,43(3):68-69,111
To effectively reduce inter-cell interference in multi-antenna cellular systems at low implementation complexity while preserving user fairness, a new multi-cell joint scheduling algorithm is proposed whose complexity is far lower than that of optimal multi-cell joint scheduling. The algorithm partitions the system into clusters, selects within each cluster a given proportion of users with low average rates, and then performs joint scheduling inside the cluster according to a given optimization objective. Simulation results show that, compared with single-cell scheduling, the proposed multi-cell joint scheduling algorithm significantly increases the average system rate while achieving user fairness similar to that of single-cell scheduling.

4.
In energy-constrained communication systems, energy efficiency is a key performance metric. This paper studies a wireless-powered hybrid non-orthogonal multiple access system consisting of one base station and multiple clustered users. In this network, the base station powers the users via wireless energy transfer, and the users use the harvested energy to transmit their information back to the base station. To reduce the decoding complexity at the base station, users transmit in clusters: time-division multiple access is used between clusters, while non-orthogonal multiple access is used within each cluster. Network energy efficiency is maximized by jointly allocating the durations of energy transfer and information transmission and by controlling the transmit power of the base station and the users. Since the resulting optimization problem is non-convex, the paper first characterizes the structure of its optimal solution and then, based on fractional programming theory, proposes a new iterative resource allocation algorithm to solve it. Simulation results show that the proposed algorithm significantly improves network energy efficiency compared with two baseline strategies, throughput maximization and fixed time allocation.
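The fractional-programming step can be illustrated with a generic Dinkelbach-style iteration; the sketch below uses a toy inner problem and placeholder names, not the paper's actual joint time/power subproblem.

```python
import math

def dinkelbach_energy_efficiency(solve_inner, tol=1e-6, max_iter=50):
    """Generic Dinkelbach iteration for maximizing throughput / energy.

    solve_inner(lam) must return (x, throughput, energy): a maximizer of
    throughput(x) - lam * energy(x) over the feasible set (in the paper's
    setting, the time-allocation and power constraints), plus its values.
    """
    lam = 0.0
    x = None
    for _ in range(max_iter):
        x, thr, eng = solve_inner(lam)
        if abs(thr - lam * eng) < tol:   # F(lam) ~ 0: lam is the optimal energy efficiency
            break
        lam = thr / eng                  # Dinkelbach update
    return x, lam

# Toy inner problem: one transmit power p in [0, 4],
# throughput log(1 + p), energy p + 0.5 (fixed circuit power).
def toy_inner(lam):
    grid = [i * 0.01 for i in range(401)]
    p = max(grid, key=lambda q: math.log1p(q) - lam * (q + 0.5))
    return p, math.log1p(p), p + 0.5

p_opt, ee = dinkelbach_energy_efficiency(toy_inner)
print(p_opt, ee)
```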

5.
An improved proportional-fair resource allocation algorithm is proposed for multi-user OFDMA systems. The algorithm adopts a two-step joint resource allocation scheme, separating subcarrier allocation from power allocation. Building on the Wong algorithm, it introduces a proportional constraint factor: in the subcarrier allocation step, the remaining subcarriers are allocated by balancing proportional fairness against system capacity maximization, and in the per-user power allocation step a set of linear equations is used, which greatly reduces the algorithmic complexity. Simulation analysis shows that the improved algorithm both increases system throughput and reduces complexity.
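A rough sketch of the two-step idea, with hypothetical variable names: subcarriers are assigned greedily to the user furthest below its proportional rate target under equal per-subcarrier power. The improved algorithm's proportional constraint factor and linear power equations are not reproduced here.

```python
import numpy as np

def pf_subcarrier_allocation(gain, weights, total_power=1.0, noise=1e-3):
    """gain: (n_users, n_subcarriers) channel gains; weights: proportional
    rate targets (one per user). Returns the owner of each subcarrier and
    the resulting per-user rates, with equal power on every subcarrier."""
    n_users, n_sc = gain.shape
    owner = -np.ones(n_sc, dtype=int)
    rate = np.zeros(n_users)
    p_sc = total_power / n_sc                     # equal power per subcarrier

    # Step 1: every user first grabs its best unassigned subcarrier.
    for u in range(n_users):
        free = np.flatnonzero(owner < 0)
        best = free[np.argmax(gain[u, free])]
        owner[best] = u
        rate[u] += np.log2(1 + p_sc * gain[u, best] / noise)

    # Step 2: each remaining subcarrier goes to the user currently furthest
    # below its proportional share (smallest rate / weight).
    for s in np.flatnonzero(owner < 0):
        u = int(np.argmin(rate / weights))
        owner[s] = u
        rate[u] += np.log2(1 + p_sc * gain[u, s] / noise)
    return owner, rate

owner, rate = pf_subcarrier_allocation(np.random.rand(4, 32),
                                       np.array([0.4, 0.3, 0.2, 0.1]))
```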

6.
Ultra-dense networks (UDNs) shorten the distance between terminals and nodes, which greatly improves spectral efficiency and expands system capacity, but the performance of cell-edge users degrades severely. Well-planned virtual cells (VC) can only mitigate interference in medium-scale UDNs, while interference among users under overlapping base stations requires a cooperative user clustering approach. This paper proposes an interference increment reduction (IIR) user clustering algorithm that repeatedly exchanges the user causing the largest interference between clusters, minimizing the intra-cluster interference sum and ultimately maximizing the system sum rate. Without increasing the complexity of the K-means algorithm, the proposed method needs no designated cluster head and avoids getting trapped in local optima. Simulation results show that, under dense deployment, it effectively improves the system sum rate and, in particular, the throughput of edge users.
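A toy sketch of the exchange idea, assuming a known pairwise interference matrix and a simple greedy acceptance rule (not the paper's exact IIR procedure): repeatedly move the strongest intra-cluster interferer to another cluster whenever that lowers the total intra-cluster interference.

```python
import numpy as np

def intra_interference(I, clusters):
    """Sum of pairwise interference I[u, v] inside each cluster."""
    total = 0.0
    for c in clusters:
        idx = np.array(sorted(c))
        if len(idx) > 1:
            sub = I[np.ix_(idx, idx)]
            total += sub.sum() - np.trace(sub)
    return total

def iir_clustering(I, clusters, max_rounds=100):
    """Greedy interference-increment reduction: move the worst interferer of a
    cluster to another cluster, keeping the move only if the overall
    intra-cluster interference sum drops."""
    clusters = [set(c) for c in clusters]
    best = intra_interference(I, clusters)
    for _ in range(max_rounds):
        improved = False
        for ci in range(len(clusters)):
            if improved:
                break
            c = clusters[ci]
            if len(c) <= 1:
                continue
            # user contributing the most interference inside its own cluster
            worst = max(c, key=lambda u: sum(I[u, v] + I[v, u] for v in c if v != u))
            for cj in range(len(clusters)):
                if cj == ci:
                    continue
                trial = [set(x) for x in clusters]
                trial[ci].discard(worst)
                trial[cj].add(worst)
                cost = intra_interference(I, trial)
                if cost < best:                    # accept only improving moves
                    clusters, best, improved = trial, cost, True
                    break
        if not improved:
            break
    return [sorted(c) for c in clusters], best

# toy example: 8 users, random interference matrix, two initial clusters
I = np.abs(np.random.randn(8, 8))
np.fill_diagonal(I, 0)
print(iir_clustering(I, [[0, 1, 2, 3], [4, 5, 6, 7]]))
```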

7.
Interference alignment can effectively increase interference channel capacity and suppress interference; the problem reduces to obtaining a closed-form solution. Traditional interference alignment schemes exploit channel reciprocity and use iterative (distributed) methods to approach the closed-form solution, but at considerable computational cost. This paper proposes a low-complexity distributed interference alignment algorithm for the K-user MIMO interference channel: a sorted QR decomposition based on the modified Gram-Schmidt method is applied to each user's interference covariance matrix, and the last columns of the unitary matrix Q are selected as the user's approximate interference-suppression filter matrix. Compared with traditional distributed interference alignment, the system complexity is markedly reduced while the system capacity and iteration convergence speed are essentially preserved. Simulation results verify the effectiveness of the algorithm.
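A NumPy sketch of the receive-filter construction described above; the matrix sizes, the pivoting rule, and the number of retained columns are assumptions. It performs a sorted QR decomposition of the interference covariance via modified Gram-Schmidt and keeps the last columns of Q as the approximate suppression filter.

```python
import numpy as np

def sorted_qr_mgs(A):
    """Modified Gram-Schmidt QR with column pivoting: at each step the
    remaining column with the largest norm is orthogonalized next.
    Returns Q, R and the column permutation."""
    A = A.astype(complex).copy()
    n = A.shape[1]
    Q = np.zeros_like(A)
    R = np.zeros((n, n), dtype=complex)
    perm = np.arange(n)
    for k in range(n):
        # pivot: bring the largest remaining column to position k
        j = k + np.argmax(np.linalg.norm(A[:, k:], axis=0))
        A[:, [k, j]] = A[:, [j, k]]
        R[:, [k, j]] = R[:, [j, k]]
        perm[[k, j]] = perm[[j, k]]
        R[k, k] = np.linalg.norm(A[:, k])
        Q[:, k] = A[:, k] / R[k, k]
        for i in range(k + 1, n):             # MGS update of the remaining columns
            R[k, i] = Q[:, k].conj() @ A[:, i]
            A[:, i] -= R[k, i] * Q[:, k]
    return Q, R, perm

def interference_filter(Rq, d):
    """Keep the last d columns of Q from the sorted QR of the interference
    covariance matrix Rq as the (approximate) suppression filter."""
    Q, _, _ = sorted_qr_mgs(Rq)
    return Q[:, -d:]

# toy: 4-antenna receiver, near rank-2 interference covariance, keep 2 dimensions
H = np.random.randn(4, 2) + 1j * np.random.randn(4, 2)
Rq = H @ H.conj().T + 1e-3 * np.eye(4)
U = interference_filter(Rq, 2)
```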

8.
In time-division duplex systems using massive MIMO, antenna reciprocity errors destroy the reciprocity between the uplink and downlink channels and substantially degrade the downlink performance of precoding algorithms. Since practical systems can hardly eliminate antenna reciprocity errors completely, this paper designs a linear precoding algorithm that is robust to such errors by maximizing each user's average signal-to-leakage-plus-noise ratio based on the statistical properties of the reciprocity errors. To further reduce the equivalent noise power at the user receivers, the linear robust precoder is extended to a nonlinear robust precoder based on vector perturbation, whose perturbation-vector search complexity is reduced with lattice-reduction-aided techniques, making it better suited to massive MIMO systems. Computer simulations show that, in the presence of base station antenna reciprocity errors, both the proposed linear and nonlinear robust precoding algorithms outperform conventional precoding algorithms.
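The non-robust SLNR criterion underlying the design can be sketched as follows (per-user single stream, hypothetical dimensions): each user's precoder is the dominant generalized eigenvector of its signal matrix against its leakage-plus-noise matrix. The paper's robust version would additionally average these matrices over the reciprocity-error statistics, which is not modeled here.

```python
import numpy as np
from scipy.linalg import eigh

def slnr_precoders(H, noise_var=1.0):
    """H: (K, Nr, Nt) channels of K users. Returns an (Nt, K) matrix whose
    k-th column maximizes |H_k w|^2 / (noise + sum_{j != k} |H_j w|^2)."""
    K, _, Nt = H.shape
    W = np.zeros((Nt, K), dtype=complex)
    for k in range(K):
        Sk = H[k].conj().T @ H[k]                                  # signal matrix
        Lk = noise_var * np.eye(Nt) + sum(
            H[j].conj().T @ H[j] for j in range(K) if j != k)      # leakage + noise
        # dominant generalized eigenvector of (Sk, Lk)
        vals, vecs = eigh(Sk, Lk)
        w = vecs[:, -1]
        W[:, k] = w / np.linalg.norm(w)
    return W

H = np.random.randn(3, 2, 4) + 1j * np.random.randn(3, 2, 4)       # 3 users, 2 rx, 4 tx antennas
W = slnr_precoders(H)
```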

9.
To reduce inter-cell interference and improve spectral efficiency in ultra-dense networks, a user clustering scheme based on edge weights and the Greedy Tree Growing Algorithm (GTGA) is presented for user-centric, overlapping virtual cell scenarios. Since each user both causes interference to and suffers interference from other users, the weights are designed with a balanced strategy for cooperative transmission. For user clustering, an improved K-means algorithm dynamically adjusts the cluster sizes through weight statistics that can be fitted by a Gaussian distribution. Simulation results show that the proposed algorithm effectively reduces complexity, mitigates interference, and improves the spectral efficiency of ultra-dense networks.

10.
To eliminate multi-user interference in multi-user MIMO downlink systems and improve bit error rate performance, a joint scheme combining block diagonalization precoding with geometric mean decomposition (BD-GMD) is studied. For resource allocation and user scheduling in BD-GMD systems, the effects of equal power allocation and the water-filling algorithm are compared, and a low-complexity user scheduling algorithm based on the subspace characteristics of the user channels is proposed with the aim of maximizing system capacity. In addition, exhaustive search and the conventional greedy algorithm are compared. Numerical simulations show that the proposed orthogonal-projection-based multi-user scheduling algorithm reduces algorithmic complexity while preserving system capacity.
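As a reference point for the equal-power vs. water-filling comparison mentioned above, here is a standard water-filling routine over effective subchannel gains (a generic sketch, not the paper's BD-GMD-specific allocation):

```python
import numpy as np

def waterfilling(gains, total_power):
    """Allocate total_power over parallel subchannels with the given gains so
    that p_i = max(0, mu - 1/g_i), with the water level mu spending the budget."""
    g = np.asarray(gains, dtype=float)
    inv = 1.0 / g
    order = np.argsort(inv)                       # strongest subchannels first
    inv_sorted = inv[order]
    p = np.zeros_like(g)
    for k in range(len(g), 0, -1):                # try using the k best subchannels
        mu = (total_power + inv_sorted[:k].sum()) / k
        if mu > inv_sorted[k - 1]:                # all k power levels positive -> valid
            p[order[:k]] = mu - inv_sorted[:k]
            break
    return p

print(waterfilling([2.0, 1.0, 0.1], total_power=1.0))   # weakest subchannel gets nothing
```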

11.
The problem of low complexity linear programming (LP) decoding of low-density parity-check (LDPC) codes is considered. An iterative algorithm, similar to min-sum and belief propagation, for efficient approximate solution of this problem was proposed by Vontobel and Koetter. In this paper, the convergence rate and computational complexity of this algorithm are studied using a scheduling scheme that we propose. In particular, we are interested in obtaining a feasible vector in the LP decoding problem that is close to optimal in the following sense: the distance, normalized by the block length, between the minimum and the objective function value of this approximate solution can be made arbitrarily small. It is shown that such a feasible vector can be obtained with a computational complexity which scales linearly with the block length. Combined with previous results that have shown that the LP decoder can correct some fixed fraction of errors, we conclude that this error correction can be achieved with linear computational complexity. This is achieved by first applying the iterative LP decoder, which decodes the correct transmitted codeword up to an arbitrarily small fraction of erroneous bits, and then correcting the remaining errors using some standard method. These conclusions are also extended to generalized LDPC codes.

12.
MSWF-based DOA estimation algorithm without reference signals
刘红明  何子述  夏威  程婷  李军 《电子学报》2010,38(9):1979-1983
Existing subspace direction-of-arrival (DOA) estimation algorithms based on the multistage Wiener filter (MSWF) have low complexity but require an a priori reference signal. Starting from solving the linear prediction problem with the MSWF, this paper combines the MSWF-based linear prediction and subspace DOA estimation approaches and proposes a practical low-complexity DOA estimation algorithm. The algorithm needs no dedicated reference signal and retains good robustness and estimation performance at low SNR or when the number of sources is estimated inaccurately. Simulation experiments verify these conclusions.

13.
We investigate the structure of the polytope underlying the linear programming (LP) decoder introduced by Feldman, Karger, and Wainwright. We first show that for expander codes, every fractional pseudocodeword always has at least a constant fraction of nonintegral bits. We then prove that for expander codes, the active set of any fractional pseudocodeword is smaller by a constant fraction than that of any codeword. We further exploit these geometrical properties to devise an improved decoding algorithm with the same order of complexity as LP decoding that provably performs better. The method is very simple: it first applies ordinary LP decoding, and when it fails, it proceeds by guessing facets of the polytope, and then resolving the linear program on these facets. While the LP decoder succeeds only if the ML codeword has the highest likelihood over all pseudocodewords, we prove that the proposed algorithm, when applied to suitable expander codes, succeeds unless there exists a certain number of pseudocodewords, all adjacent to the ML codeword on the LP decoding polytope, and with higher likelihood than the ML codeword. We then describe an extended algorithm, still with polynomial complexity, that succeeds as long as there are at most polynomially many pseudocodewords above the ML codeword.

14.
We consider the decoding problem for low-density parity-check codes, and apply nonlinear programming methods. This extends previous work using linear programming (LP) to decode linear block codes. First, a multistage LP decoder based on the branch-and-bound method is proposed. This decoder makes use of the maximum-likelihood-certificate property of the LP decoder to refine the results when an error is reported. Second, we transform the original LP decoding formulation into a box-constrained quadratic programming form. Efficient linear-time parallel and serial decoding algorithms are proposed and their convergence properties are investigated. Extensive simulation studies are performed to assess the performance of the proposed decoders. It is seen that the proposed multistage LP decoder outperforms the conventional sum-product (SP) decoder considerably for low-density parity-check (LDPC) codes with short to medium block length. The proposed box-constrained quadratic programming decoder has less complexity than the SP decoder and yields much better performance for LDPC codes with regular structure.
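A generic sketch of solving a box-constrained quadratic program by projected gradient descent; the tiny objective below is arbitrary and is not the paper's decoding formulation.

```python
import numpy as np

def projected_gradient_box_qp(Q, c, lo=0.0, hi=1.0, step=None, n_iter=500):
    """Minimize 0.5 x^T Q x + c^T x subject to lo <= x <= hi (elementwise)
    by gradient steps followed by projection onto the box."""
    n = len(c)
    if step is None:
        step = 1.0 / np.linalg.norm(Q, 2)          # 1/L step size for convex Q
    x = np.clip(np.zeros(n), lo, hi)
    for _ in range(n_iter):
        grad = Q @ x + c
        x = np.clip(x - step * grad, lo, hi)       # projection onto the box = clipping
    return x

# tiny example: Q positive definite, unconstrained optimum outside the box
Q = np.array([[2.0, 0.5], [0.5, 1.0]])
c = np.array([-4.0, 1.0])
print(projected_gradient_box_qp(Q, c))             # converges to the box corner [1, 0]
```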

15.
The radar signal sorting method based on the traditional support vector clustering (SVC) algorithm has high time complexity, and traditional validity indices cannot reliably indicate the best sorting result. To solve this problem, we study a new sorting method based on the cone cluster labeling (CCL) method. The CCL method relies on the theory of approximate coverings in both feature space and data space. A new cluster validity index, similitude entropy (SE), is also proposed; it uses information entropy to evaluate the compactness and separation of clusters. Simulations, including a performance comparison between the proposed method and conventional methods, are presented. Results show that, while maintaining sorting accuracy, the proposed method effectively reduces the computational complexity of signal sorting.

16.
熊伟  廖巍  张帆  景宁  陈宏盛 《电子学报》2006,34(6):1069-1073
Housekeeping optimizes the I/O cost of the refinement step in spatial join processing; the problem is NP-hard, and existing solution methods have excessive complexity. This paper reduces the page clustering and cluster scheduling problems to graph k-partitioning and longest-path problems, respectively, and proposes an improved genetic algorithm and a maximum-spanning-tree-based approximation algorithm to solve them. The classical genetic algorithm is modified to satisfy the page clustering partition constraints, and the approximation algorithm's solution is guaranteed to exceed half of the optimal solution. Theoretical analysis and simulation experiments verify the feasibility and effectiveness of the algorithms.

17.
To address the sensitivity of the K-means algorithm to the choice of initial cluster centers, an improved K-means algorithm is proposed that optimizes the selection of cluster centers, can obtain a globally optimal clustering partition, and reduces the time complexity. Experimental results show that, compared with classical clustering algorithms, network intrusion detection with the proposed algorithm achieves desirable detection and false alarm rates.

18.
MPEG-4 is the first visual coding standard that allows coding of scenes as a collection of individual audio-visual objects. We present mathematical formulations for modeling object-based scalability and some of the functionalities it brings with it. Our goal is to study algorithms that aid in semi-automating the authoring and subsequent selective addition/dropping of objects from a scene to provide content scalability. We start with a simplistic model for object-based scalability using the "knapsack problem", a problem for which the optimal object set can be found using known schemes such as dynamic programming, the branch and bound method, and approximation algorithms. The above formulation is then generalized to model authoring or multiplexing of scalable objects (e.g., objects encoded at various target bit rates) using the "multiple choice knapsack problem." We relate this model to several problems that arise in video coding, the most prominent of these being the bit allocation problem. Unlike previous approaches that solve the operational bit allocation problem using Lagrangean relaxation, we discuss an algorithm that solves the linear programming (LP) relaxation of this problem. We show that for this problem the duality gap for the Lagrange and LP relaxations is exactly the same. The LP relaxation is solved using strong duality with dual descent, a procedure that can be completed in "linear" time. We show that there can be at most two fractional variables in the optimal primal solution, and therefore this relaxation can be justified for many practical applications. This work reduces problem complexity, guarantees similar performance, is slightly more generic, and provides an alternate LP-duality-based proof for earlier work by Shoham and Gersho (1988). In addition, we show how additional constraints may be added to impose inter-dependencies among objects in a presentation and discuss how object aggregation can be exploited to reduce problem complexity. The marginal analysis approach of Fox (1966) is suggested as a method of re-allocation with incremental inputs; it helps in efficiently re-optimizing the allocation when a system has user interactivity, appearing or disappearing objects, time-driven events, etc. Finally, we suggest approximation algorithms for the multiple choice knapsack problem that can be used to quantify the complexity vs. quality tradeoff at the encoder in a tunable and universal way.
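For the simplest formulation mentioned above, selecting objects under a bit budget as a 0/1 knapsack, here is a textbook dynamic-programming sketch with hypothetical bit costs and utilities (the paper's multiple-choice and LP-relaxation variants are not shown):

```python
def knapsack_objects(bits, utility, budget):
    """0/1 knapsack: choose a subset of objects maximizing total utility with
    total bit cost <= budget. bits and budget are integers (e.g. kilobits)."""
    n = len(bits)
    dp = [0] * (budget + 1)                       # dp[b] = best utility within budget b
    keep = [[False] * (budget + 1) for _ in range(n)]
    for i in range(n):
        for b in range(budget, bits[i] - 1, -1):  # reverse scan keeps items 0/1
            if dp[b - bits[i]] + utility[i] > dp[b]:
                dp[b] = dp[b - bits[i]] + utility[i]
                keep[i][b] = True
    # backtrack the chosen object set
    chosen, b = [], budget
    for i in range(n - 1, -1, -1):
        if keep[i][b]:
            chosen.append(i)
            b -= bits[i]
    return dp[budget], sorted(chosen)

# toy scene: 4 objects with bit costs and perceptual utilities, 10-unit budget
print(knapsack_objects([5, 4, 6, 3], [10, 40, 30, 50], 10))   # -> (90, [1, 3])
```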

19.
Detectability of failures of linear programming (LP) decoding and the potential for improvement by adding new constraints motivate the use of an adaptive approach in selecting the constraints for the underlying LP problem. In this paper, we make a first step in studying this method, and show that by starting from a simple LP problem and adaptively adding the necessary constraints, the complexity of LP decoding can be significantly reduced. In particular, we observe that with adaptive LP decoding, the sizes of the LP problems that need to be solved become practically independent of the density of the parity-check matrix. We further show that adaptively adding extra constraints, such as constraints based on redundant parity checks, can provide large gains in the performance.
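A simplified sketch of the adaptive idea: start from box constraints only, then add violated parity inequalities and re-solve. The cut-search heuristic, tolerance, and toy code below are illustrative assumptions, not the authors' exact procedure; scipy's linprog is used as the LP solver.

```python
import numpy as np
from scipy.optimize import linprog

def adaptive_lp_decode(H, llr, max_rounds=50):
    """Adaptive LP decoding sketch: solve the LP with only box constraints,
    then repeatedly add violated parity inequalities. H: binary parity-check
    matrix, llr: log-likelihood ratios (positive favors bit 0). Returns the
    relaxed solution x in [0, 1]^n."""
    m, n = H.shape
    A_ub, b_ub = [], []                          # accumulated cut constraints
    x = None
    for _ in range(max_rounds):
        res = linprog(llr, A_ub=np.array(A_ub) if A_ub else None,
                      b_ub=np.array(b_ub) if b_ub else None,
                      bounds=[(0, 1)] * n, method="highs")
        x = res.x
        new_cut = False
        for j in range(m):
            N = np.flatnonzero(H[j])
            V = [i for i in N if x[i] > 0.5]
            if len(V) % 2 == 0:                  # parity inequalities need odd |V|
                cand = min(N, key=lambda i: abs(x[i] - 0.5))
                V = [i for i in V if i != cand] if cand in V else V + [cand]
            lhs = sum(x[i] for i in V) - sum(x[i] for i in N if i not in V)
            if lhs > len(V) - 1 + 1e-9:          # violated: add it as a cut
                row = np.zeros(n)
                row[list(V)] = 1.0
                row[[i for i in N if i not in V]] = -1.0
                A_ub.append(row)
                b_ub.append(len(V) - 1)
                new_cut = True
        if not new_cut:
            break
    return x

# toy (7,4) Hamming-style check matrix and LLRs with one unreliable bit
H = np.array([[1, 1, 0, 1, 1, 0, 0],
              [1, 0, 1, 1, 0, 1, 0],
              [0, 1, 1, 1, 0, 0, 1]])
llr = np.array([1.2, 0.8, -0.4, 1.0, 0.9, 1.1, 0.7])
print(np.round(adaptive_lp_decode(H, llr), 3))   # converges to the all-zero codeword
```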

20.
The alternate direction method of multipliers (ADMM) algorithm has recently been proposed for LDPC decoding based on linear programming (LP) techniques. Even though it improves error rate performance compared with usual message passing (MP) techniques, it has higher computational complexity. However, a significant step towards scalable, optimized LP LDPC decoding becomes possible because the ADMM algorithm behaves like an MP decoder. In this paper, an overview of the ADMM approach and its error correction performance is provided. Then, its computational and memory complexities are evaluated. Finally, optimized software implementations of the decoder that take advantage of multi/many-core device features are described. Optimization choices are discussed and justified according to execution profiling figures and the algorithm's parallelism levels. Experimental results show that this LP-based decoding technique can meet the real-time throughput requirements of the WiMAX and WRAN standards on mid-range devices.
