20 similar documents found; search time: 218 ms.
1.
To reduce the computational complexity of base-station management in ultra-dense networks and improve base-station energy efficiency, a clustering-based dynamic base-station management algorithm is proposed that uses information such as user density and network load. The algorithm first computes the theoretical minimum number of required base stations from user measurement reports, then partitions the base stations into reasonable network clusters, and finally determines the base-station sleeping combination with a particle swarm optimization (PSO) algorithm. Simulation results show that, compared with a base-station management algorithm without clustering, the proposed algorithm reduces computational complexity by about 60% and effectively lowers base-station energy consumption.
2.
A new base-station spatial-domain cooperative interference suppression scheme is proposed that dynamically groups the base stations of neighboring cells into clusters according to the interference each user experiences. The scheme is verified with Matlab simulations under different precoding algorithms and different numbers of transmit antennas, where the choice of precoding algorithm is a trade-off between system performance requirements and implementation complexity. Simulation results show that the proposed dynamic-cluster inter-cell interference suppression scheme can...
3.
4.
In energy-constrained communication systems, energy efficiency is a key performance metric. This paper studies a wireless-powered hybrid non-orthogonal multiple access (NOMA) system consisting of one base station and multiple clustered users. In this network, the base station powers the users via wireless energy transfer, and the users use the harvested energy to transmit their information back to the base station. To reduce decoding complexity at the base station, the users transmit in clusters: transmissions across clusters use time-division multiple access, while transmissions within a cluster use NOMA. Network energy efficiency is maximized by jointly allocating the durations of energy transfer and information transmission and controlling the transmit powers of the base station and the users. Since the resulting optimization problem is non-convex, the paper first characterizes the structure of its optimal solution and then, based on fractional programming theory, proposes a new iterative resource allocation algorithm to solve it. Simulation results show that, compared with two baseline strategies, throughput maximization and fixed time allocation, the proposed algorithm significantly improves network energy efficiency.
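The fractional-programming step in such energy-efficiency problems is typically a Dinkelbach iteration: maximize R(x) - q·P(x) for a fixed ratio q, then update q to the efficiency achieved. A minimal single-variable sketch, using a toy rate log(1+p) and power model p + p_c (both assumptions for illustration, not the paper's system model):

```python
import math

def dinkelbach_ee(pc=0.5, pmax=4.0, tol=1e-9, max_iter=100):
    """Dinkelbach iteration for max_{0<=p<=pmax} log(1+p) / (p+pc).

    Inner problem: argmax_p log(1+p) - q*(p+pc) is concave with the
    closed-form stationary point p = 1/q - 1, clipped to [0, pmax].
    """
    q = 0.0
    p = pmax
    for _ in range(max_iter):
        p = min(pmax, max(0.0, 1.0 / q - 1.0)) if q > 0 else pmax
        F = math.log(1.0 + p) - q * (p + pc)
        if abs(F) < tol:       # F(q*) = 0 characterizes the optimum
            break
        q = math.log(1.0 + p) / (p + pc)   # update ratio to achieved EE
    return q, p

q_star, p_star = dinkelbach_ee()
```

At convergence q_star is the maximum efficiency, so the ratio evaluated at any feasible power never exceeds it.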
5.
6.
Ultra-dense networks (UDNs) shorten the distance between terminals and nodes, greatly improving network spectral efficiency and expanding system capacity, but the performance of cell-edge users degrades severely. Well-planned virtual cells (VCs) can only mitigate interference in moderately sized UDNs; interference among users under overlapping base stations must be resolved by cooperative user clustering. This paper proposes an interference-increment-reduction (IIR) user clustering algorithm that repeatedly exchanges the users causing the largest interference between clusters, minimizing the intra-cluster interference sum and ultimately maximizing the system sum rate. Without exceeding the complexity of the K-means algorithm, it requires no designated cluster heads and avoids getting trapped in local optima. Simulation results show that, under dense network deployment, the algorithm effectively improves the system sum rate, and especially the throughput of edge users.
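The idea of exchanging the worst-interfering users between clusters can be sketched as a greedy pairwise-swap search over a symmetric interference matrix W. This is a simplified sketch of the swap principle only, not the paper's exact IIR algorithm:

```python
def intra_interference(clusters, W):
    """Sum of pairwise interference W[i][j] inside each cluster."""
    total = 0.0
    for c in clusters:
        for i in c:
            for j in c:
                if i < j:
                    total += W[i][j]
    return total

def iir_cluster(W, clusters):
    """Greedy swap search: while some cross-cluster user swap lowers the
    intra-cluster interference sum, apply the best such swap."""
    improved = True
    while improved:
        improved = False
        best = None
        base = intra_interference(clusters, W)
        for a in range(len(clusters)):
            for b in range(a + 1, len(clusters)):
                for u in clusters[a]:
                    for v in clusters[b]:
                        ca = [x for x in clusters[a] if x != u] + [v]
                        cb = [x for x in clusters[b] if x != v] + [u]
                        trial = (clusters[:a] + [ca]
                                 + clusters[a + 1:b] + [cb]
                                 + clusters[b + 1:])
                        val = intra_interference(trial, W)
                        if val < base - 1e-12 and (best is None or val < best[0]):
                            best = (val, trial)
        if best:
            clusters, improved = best[1], True
    return clusters
```

For example, if users 0-1 and users 2-3 interfere strongly with each other, starting from the clustering [[0,1],[2,3]] one swap separates the strong pairs and drops the intra-cluster sum from 20 to 0.2.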
7.
Interference alignment can effectively increase interference-channel capacity and suppress interference; the problem reduces to finding a closed-form solution. Traditional interference alignment schemes exploit channel reciprocity and use iterative (distributed) methods to approach the closed-form solution, but this incurs considerable computational overhead. This paper proposes a low-complexity distributed interference alignment algorithm for the K-user MIMO interference channel: the interference covariance matrix of user K is factored with a sorted QR decomposition based on the modified Gram-Schmidt method, and the last columns of the resulting unitary matrix Q are taken as the user's approximate interference suppression filter matrix. Compared with the traditional distributed interference alignment scheme, system complexity is markedly reduced while system capacity and iteration convergence speed are essentially preserved. Simulation results verify the effectiveness of the algorithm.
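The factorization underlying this scheme is a QR decomposition by modified Gram-Schmidt. A plain (unsorted) real-valued MGS sketch is below; the paper's column sorting and selection of the trailing columns of Q are omitted:

```python
import math

def mgs_qr(A):
    """Modified Gram-Schmidt QR of an m x n real matrix with
    linearly independent columns; A is a list of m rows.
    Returns Q (m x n, orthonormal columns) and R (n x n, upper triangular)."""
    m, n = len(A), len(A[0])
    # Work column-wise: V[j] is column j of A.
    V = [[A[i][j] for i in range(m)] for j in range(n)]
    Qcols, R = [], [[0.0] * n for _ in range(n)]
    for j in range(n):
        R[j][j] = math.sqrt(sum(x * x for x in V[j]))
        q = [x / R[j][j] for x in V[j]]
        Qcols.append(q)
        # MGS: orthogonalize the *remaining* columns against q immediately.
        for k in range(j + 1, n):
            R[j][k] = sum(q[i] * V[k][i] for i in range(m))
            V[k] = [V[k][i] - R[j][k] * q[i] for i in range(m)]
    # Reassemble Q in row-major form.
    Q = [[Qcols[j][i] for j in range(n)] for i in range(m)]
    return Q, R
```

MGS is preferred over classical Gram-Schmidt here for its better numerical stability under finite precision.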
8.
In time-division duplex systems using massive MIMO, antenna reciprocity errors destroy the reciprocity between uplink and downlink channels and sharply degrade the downlink performance of precoding algorithms. Since practical systems cannot fully eliminate antenna reciprocity errors, this paper designs a linear precoding algorithm that is robust to such errors, based on their statistical properties, with the objective of maximizing each user's average signal-to-leakage-plus-noise ratio. To further reduce the equivalent noise power at the user receivers, the linear robust precoder is extended to a nonlinear robust precoder based on vector perturbation, and lattice-reduction-aided techniques are used to lower the complexity of solving for the perturbation vector, making it better suited to massive MIMO systems. Computer simulations show that, in the presence of base-station antenna reciprocity errors, both the proposed linear and nonlinear robust precoding algorithms outperform conventional precoding algorithms.
9.
To reduce inter-cell interference and improve spectral efficiency in ultra-dense networks, a user clustering scheme based on edge weights and the Greedy Tree Growing Algorithm (GTGA) is presented for user-centric, overlappable virtual-cell scenarios. Since each user both causes interference to other users and suffers interference from them, the weight design adopts a balanced strategy for cooperative transmission. For user clustering, an improved K-means clustering algorithm dynamically adjusts cluster sizes through weight statistics that fit a Gaussian distribution. Simulation results show that the proposed algorithm effectively reduces complexity, mitigates interference, and improves the spectral efficiency of ultra-dense networks.
10.
To eliminate multi-user interference in multi-user MIMO downlink systems and improve bit-error-rate performance, a joint scheme combining block-diagonalization precoding with geometric mean decomposition (BD-GMD) is studied. For resource allocation and user scheduling in BD-GMD systems, the effects of equal power allocation and the water-filling algorithm are compared, and a low-complexity user scheduling algorithm based on the subspace characteristics of user channels is proposed to maximize system capacity. In addition, exhaustive search and the conventional greedy algorithm are compared. Numerical simulations show that the proposed orthogonal-projection-based multi-user scheduling algorithm preserves system capacity while reducing algorithmic complexity.
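Water-filling, one of the two power allocations compared above, solves p_i = max(0, μ - 1/g_i) subject to Σ p_i = P by choosing the water level μ; a bisection sketch over parallel channels with gains g_i:

```python
def water_filling(gains, p_total, eps=1e-12):
    """Water-filling over parallel channels: allocate p_i = max(0, mu - 1/g_i)
    so that the total power equals p_total, via bisection on the level mu."""
    # mu can never exceed p_total plus the largest inverse gain.
    lo, hi = 0.0, p_total + max(1.0 / g for g in gains)
    while hi - lo > eps:
        mu = (lo + hi) / 2.0
        used = sum(max(0.0, mu - 1.0 / g) for g in gains)
        if used > p_total:
            hi = mu    # water level too high
        else:
            lo = mu
    mu = (lo + hi) / 2.0
    return [max(0.0, mu - 1.0 / g) for g in gains]

alloc = water_filling([1.0, 0.5], 1.0)
```

Stronger channels (larger g_i) sit lower in the "vessel" and therefore receive more power; sufficiently weak channels get none.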
11.
IEEE Transactions on Information Theory, 2009, 55(11): 4835-4859
12.
13.
Dimakis A.G., Gohari A.A., Wainwright M.J. IEEE Transactions on Information Theory, 2009, 55(8): 3479-3487
We investigate the structure of the polytope underlying the linear programming (LP) decoder introduced by Feldman, Karger, and Wainwright. We first show that for expander codes, every fractional pseudocodeword always has at least a constant fraction of nonintegral bits. We then prove that for expander codes, the active set of any fractional pseudocodeword is smaller by a constant fraction than that of any codeword. We further exploit these geometrical properties to devise an improved decoding algorithm with the same order of complexity as LP decoding that provably performs better. The method is very simple: it first applies ordinary LP decoding, and when it fails, it proceeds by guessing facets of the polytope, and then resolving the linear program on these facets. While the LP decoder succeeds only if the ML codeword has the highest likelihood over all pseudocodewords, we prove that the proposed algorithm, when applied to suitable expander codes, succeeds unless there exists a certain number of pseudocodewords, all adjacent to the ML codeword on the LP decoding polytope, and with higher likelihood than the ML codeword. We then describe an extended algorithm, still with polynomial complexity, that succeeds as long as there are at most polynomially many pseudocodewords above the ML codeword.
14.
IEEE Journal on Selected Areas in Communications, 2006, 24(8): 1603-1613
We consider the decoding problem for low-density parity-check codes, and apply nonlinear programming methods. This extends previous work using linear programming (LP) to decode linear block codes. First, a multistage LP decoder based on the branch-and-bound method is proposed. This decoder makes use of the maximum-likelihood-certificate property of the LP decoder to refine the results when an error is reported. Second, we transform the original LP decoding formulation into a box-constrained quadratic programming form. Efficient linear-time parallel and serial decoding algorithms are proposed and their convergence properties are investigated. Extensive simulation studies are performed to assess the performance of the proposed decoders. It is seen that the proposed multistage LP decoder outperforms the conventional sum-product (SP) decoder considerably for low-density parity-check (LDPC) codes with short to medium block length. The proposed box-constrained quadratic programming decoder has less complexity than the SP decoder and yields much better performance for LDPC codes with regular structure.
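A box-constrained QP of the kind described above can be handled by projected gradient: take a gradient step, then clip each coordinate back into [0, 1]. A generic sketch on a small dense QP, illustrating only the problem form, not the paper's parallel/serial LDPC-specific decoders:

```python
def box_qp_pg(A, b, x0, step=0.1, iters=500):
    """Projected gradient for min 0.5 x'Ax + b'x subject to 0 <= x <= 1.
    A is symmetric positive semidefinite (list of rows); step should be
    below 1/L where L is the largest eigenvalue of A."""
    n = len(b)
    x = x0[:]
    for _ in range(iters):
        # gradient of the quadratic: Ax + b
        g = [sum(A[i][j] * x[j] for j in range(n)) + b[i] for i in range(n)]
        # gradient step followed by projection onto the box [0, 1]^n
        x = [min(1.0, max(0.0, x[i] - step * g[i])) for i in range(n)]
    return x

# Small example: the unconstrained minimizer is (1, -0.5); the box
# projection pins the second coordinate at 0.
sol = box_qp_pg([[2.0, 0.0], [0.0, 2.0]], [-2.0, 1.0], [0.5, 0.5])
```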
15.
Multiple-Parameter Radar Signal Sorting Using Support Vector Clustering and Similitude Entropy Index
Zhanling Wang, Dengfu Zhang, Duyan Bi, Shiqiang Wang. Circuits, Systems, and Signal Processing, 2014, 33(6): 1985-1996
The radar signal sorting method based on the traditional support vector clustering (SVC) algorithm has high time complexity, and the traditional validity index cannot efficiently indicate the best sorting result. To address these problems, we study a new sorting method based on the cone cluster labeling (CCL) method. The CCL method relies on the theory of approximate coverings in both feature space and data space. A new cluster validity index, similitude entropy (SE), is also proposed; it evaluates the compactness and separation of clusters using information entropy theory. Simulations comparing the proposed method with conventional methods are presented. Results show that, while maintaining sorting accuracy, the proposed method effectively reduces the computational complexity of sorting the signals.
16.
17.
To address the K-means algorithm's sensitivity to the choice of initial cluster centers, an improved K-means algorithm is proposed that optimizes the selection of cluster centers, obtains a globally optimal clustering partition, and reduces the algorithm's time complexity. Experimental results show that, compared with the classical clustering algorithm, network intrusion detection using the proposed algorithm achieves a satisfactory detection rate and false alarm rate.
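One standard remedy for K-means' sensitivity to initial centers is k-means++-style seeding, which spreads the starting centers apart by sampling points proportionally to their squared distance from the centers chosen so far. This is a common improvement, not necessarily the exact one the abstract proposes; the sketch below assumes 2-D points:

```python
import random

def kmeans_pp_init(points, k, rng):
    """k-means++ style seeding for 2-D points: pick the next center with
    probability proportional to squared distance from existing centers."""
    centers = [list(rng.choice(points))]
    while len(centers) < k:
        d2 = [min((p[0] - c[0]) ** 2 + (p[1] - c[1]) ** 2 for c in centers)
              for p in points]
        r = rng.random() * sum(d2)
        acc = 0.0
        for p, d in zip(points, d2):
            acc += d
            if acc >= r:
                centers.append(list(p))
                break
    return centers

def kmeans(points, k, iters=50, seed=0):
    """Lloyd iterations on top of the k-means++ seeding above."""
    rng = random.Random(seed)
    centers = kmeans_pp_init(points, k, rng)
    groups = [[] for _ in range(k)]
    for _ in range(iters):
        groups = [[] for _ in range(k)]
        for p in points:  # assign each point to its nearest center
            j = min(range(k),
                    key=lambda c: (p[0] - centers[c][0]) ** 2
                                  + (p[1] - centers[c][1]) ** 2)
            groups[j].append(p)
        for j, g in enumerate(groups):  # recompute centers (skip empty groups)
            if g:
                centers[j] = [sum(x) / len(g) for x in zip(*g)]
    return centers, groups

pts = [(0.0, 0.0), (0.0, 1.0), (1.0, 0.0),
       (10.0, 10.0), (10.0, 11.0), (11.0, 10.0)]
ctrs, grps = kmeans(pts, 2)
```

On two well-separated blobs like `pts`, the seeding makes it very likely that the two initial centers land in different blobs, avoiding the poor local optima that plain random initialization can produce.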
18.
MPEG-4 is the first visual coding standard that allows coding of scenes as a collection of individual audio-visual objects. We present mathematical formulations for modeling object-based scalability and some functionalities that it brings with it. Our goal is to study algorithms that aid in semi-automating the authoring and subsequent selective addition/dropping of objects from a scene to provide content scalability. We start with a simplistic model for object-based scalability using the "knapsack problem", a problem for which the optimal object set can be found using known schemes such as dynamic programming, the branch and bound method and approximation algorithms. The above formulation is then generalized to model authoring or multiplexing of scalable objects (e.g., objects encoded at various target bit-rates) using the "multiple choice knapsack problem." We relate this model to several problems that arise in video coding, the most prominent of these being the bit allocation problem. Unlike previous approaches to solve the operational bit allocation problem using Lagrangean relaxation, we discuss an algorithm that solves linear programming (LP) relaxation of this problem. We show that for this problem the duality gap for Lagrange and LP relaxations is exactly the same. The LP relaxation is solved using strong duality with dual descent, a procedure that can be completed in "linear" time. We show that there can be at most two fractional variables in the optimal primal solution and therefore this relaxation can be justified for many practical applications. This work reduces problem complexity, guarantees similar performance, is slightly more generic, and provides an alternate LP-duality based proof for earlier work by Shoham and Gersho (1988). In addition, we show how additional constraints may be added to impose inter-dependencies among objects in a presentation and discuss how object aggregation can be exploited in reducing problem complexity.
The marginal analysis approach of Fox (1966) is suggested as a method of re-allocation with incremental inputs. It helps efficiently re-optimize the allocation when a system has user interactivity, appearing or disappearing objects, time-driven events, etc. Finally, we suggest that approximation algorithms for the multiple choice knapsack problem can be used to quantify the complexity vs. quality tradeoff at the encoder in a tunable and universal way.
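The 0/1 knapsack model that this formulation starts from has the classic dynamic-programming solution (here for an optimal object set under a total bit budget; values and weights below are illustrative):

```python
def knapsack(values, weights, capacity):
    """Classic 0/1 knapsack DP: dp[w] is the best total value achievable
    with total weight at most w. O(n * capacity) time, O(capacity) space."""
    dp = [0] * (capacity + 1)
    for v, wt in zip(values, weights):
        # iterate weights downward so each item is used at most once
        for w in range(capacity, wt - 1, -1):
            dp[w] = max(dp[w], dp[w - wt] + v)
    return dp[capacity]
```

The multiple-choice variant in the abstract additionally restricts the selection to exactly one encoding per object group; the LP relaxation discussed above replaces the integrality constraint with fractional selection.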
19.
IEEE Transactions on Information Theory, 2008, 54(12): 5396-5410
20.
Imen Debbabi, Bertrand Le Gal, Nadia Khouja, Fethi Tlili, Christophe Jégo. Journal of Signal Processing Systems, 2018, 90(11): 1551-1567
The alternating direction method of multipliers (ADMM) algorithm has recently been proposed for LDPC decoding based on linear programming (LP) techniques. Although it improves error-rate performance compared with the usual message passing (MP) techniques, it has higher computational complexity. However, a significant step towards scalable, optimized LP LDPC decoding becomes possible because the ADMM algorithm behaves like an MP decoder. In this paper, an overview of the ADMM approach and its error-correction performance is provided. Then its computational and memory complexities are evaluated. Finally, optimized software implementations of the decoder that exploit multi-/many-core device features are described. Optimization choices are discussed and justified according to execution profiling figures and the algorithm's parallelism levels. Experimental results show that this LP-based decoding technique can reach the real-time throughput requirements of the WiMAX and WRAN standards on mid-range devices.
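ADMM's update pattern (x-step, z-step, dual u-step on a split x = z) can be shown on a toy scalar problem; this illustrates only the splitting mechanics, not the polytope projections an LDPC LP decoder actually performs:

```python
def admm_box(a=3.0, lo=0.0, hi=1.0, rho=1.0, iters=200):
    """Scalar ADMM for min 0.5*(x - a)^2 subject to lo <= x <= hi,
    written as f(x) + g(z) with x = z, where g is the box indicator."""
    x = z = u = 0.0
    for _ in range(iters):
        # x-step: argmin 0.5*(x-a)^2 + (rho/2)*(x - z + u)^2, closed form
        x = (a + rho * (z - u)) / (1.0 + rho)
        # z-step: projection of x + u onto the box [lo, hi]
        z = min(hi, max(lo, x + u))
        # dual update accumulates the residual x - z
        u += x - z
    return x, z

x_final, z_final = admm_box()
```

With a = 3 the unconstrained minimizer lies outside the box, so both x and z converge to the boundary value 1; in the LDPC setting the z-step becomes a projection onto each check node's local polytope instead of a box.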