19 similar documents found; search time: 140 ms
1.
2.
3.
4.
This paper studies resource allocation algorithms for LTE-based V2V communication systems in urban scenarios. A system-level LTE-based V2V simulation platform is built to investigate V2V resource allocation. A semi-persistent scheduling algorithm based on unicast and multicast is proposed and compared against dynamic scheduling. Simulation results show that the semi-persistent resource allocation algorithm under unicast and multicast achieves high resource utilization and large system throughput.
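The core of semi-persistent scheduling (SPS) is that a resource grant is issued once and then recurs periodically, which suits periodic V2V traffic. Below is a minimal sketch of that idea; the round-robin assignment, parameter names, and offset rule are illustrative assumptions, not the paper's algorithm.

```python
# Minimal sketch of semi-persistent scheduling (SPS): each UE receives a
# fixed resource block that recurs every `period` TTIs, avoiding
# per-packet grant signalling. All names/parameters are illustrative.

def sps_allocate(ues, num_rbs, period):
    """Assign each UE a persistent grant: a resource block plus a TTI offset."""
    grants = {}
    for i, ue in enumerate(ues):
        grants[ue] = {
            "rb": i % num_rbs,            # round-robin over resource blocks
            "period": period,             # grant recurs every `period` TTIs
            "offset": i // num_rbs,       # stagger UEs sharing an RB in time
        }
    return grants

def scheduled_ues(grants, tti):
    """UEs allowed to transmit in a given TTI under their persistent grant."""
    return [ue for ue, g in grants.items()
            if tti % g["period"] == g["offset"] % g["period"]]
```

Once granted, no further control signalling is needed per TTI, which is where the resource-utilization gain over dynamic scheduling comes from.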
5.
In LTE, scheduling is performed by the scheduler located in the eNodeB, which comprises a downlink scheduler and an uplink scheduler responsible for downlink and uplink resource packet scheduling, respectively. Compared with WCDMA, where packet scheduling is controlled by the RNC, LTE can promptly adapt its modulation scheme and coding rate to channel conditions and reduce transmission delay. This paper focuses on downlink packet scheduling algorithms for TD-LTE. Building on an analysis of classic packet scheduling algorithms and their evaluation criteria, a new QoS-parameter-based packet scheduling algorithm, the RAD algorithm, is presented. Its main goal is to prioritize data transmission for users with high QCI levels according to actual user demands in the network, minimizing retransmissions while maintaining fairness.
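A QCI-priority scheduler of the kind described can be sketched as a sort over (QCI priority, waiting time). This is a hedged illustration of the general idea, not the paper's RAD algorithm; the field names and the fairness tiebreak are assumptions.

```python
# Illustrative QCI-priority ordering: users with a more urgent QCI class
# are served first; among equal classes, the user that has waited longest
# goes first (a simple fairness tiebreak). Names/weights are assumptions.

def qci_order(users):
    """Sort users by (QCI priority, waiting time); lower QCI value = more urgent."""
    # In standard LTE QCI tables, a smaller priority number means higher urgency.
    return sorted(users, key=lambda u: (u["qci_priority"], -u["wait_tti"]))

users = [
    {"id": 1, "qci_priority": 6, "wait_tti": 3},
    {"id": 2, "qci_priority": 2, "wait_tti": 1},   # e.g. conversational video
    {"id": 3, "qci_priority": 6, "wait_tti": 9},
]
order = [u["id"] for u in qci_order(users)]
```

Here user 2 is served first despite its short wait, because its QCI class outranks the others; fairness only arbitrates within a class.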
6.
7.
In the 3G Long Term Evolution (LTE) system, whose core technology is OFDM, dynamic resource allocation is one of the effective means of improving radio spectrum utilization and guaranteeing cell-edge user performance. Based on the technical characteristics of LTE, this paper first identifies three features that distinguish its dynamic resource allocation from that of existing cellular mobile communication systems, and then summarizes the research content and current status of dynamic resource allocation from the perspectives of scheduling and power control.
8.
This article first discusses three features that distinguish LTE dynamic resource allocation from resource allocation in existing cellular mobile communication systems, and then summarizes the research content and current status of dynamic resource allocation from the perspectives of scheduling and power control.
9.
10.
From the perspective of radio resource allocation, a frequency-selective scheduling scheme is proposed to increase cell throughput while fully utilizing LTE spectrum resources. The MAC-layer scheduler designed for the scheme sorts the available resources into priority levels according to the channel quality indicators (CQI) reported by the UEs, and then assigns the best resources to the corresponding UEs, realizing the frequency-selective scheduling strategy. System-level simulation shows that the scheme improves user performance.
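The "best resource to the right UE" step can be sketched as a greedy assignment over reported CQI values. This is a simplified stand-in for the MAC scheduler described above, with made-up data and a made-up serving order.

```python
# Hedged sketch of frequency-selective scheduling: rank resource blocks
# by each UE's reported CQI, then give each UE its best still-free RB.
# Greedy order (best peak CQI first) is an assumption for illustration.

def freq_selective(cqi):
    """cqi[ue] -> list of per-RB channel quality; returns ue -> assigned RB index."""
    free = set(range(len(next(iter(cqi.values())))))
    assignment = {}
    for ue in sorted(cqi, key=lambda u: -max(cqi[u])):  # best peak CQI first
        best_rb = max(free, key=lambda rb: cqi[ue][rb])  # best remaining RB
        assignment[ue] = best_rb
        free.discard(best_rb)
    return assignment

cqi = {"ue1": [15, 7, 3], "ue2": [14, 9, 2], "ue3": [6, 5, 11]}
```

Because each UE transmits on the sub-band where its own channel is strongest, multi-user frequency diversity is exploited and cell throughput rises.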
11.
12.
LTE defines the sounding reference signal (SRS), which the base station uses to estimate uplink channel quality. The base station derives uplink channel information from SRS-based channel estimation and feeds it to the scheduler as the main input for uplink scheduling. This paper details an LTE SRS-based channel estimation algorithm, with emphasis on noise and SNR estimation, and finally analyzes the accuracy of the SNR estimates under several typical channel conditions through simulation.
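The principle behind reference-signal-based SNR estimation is that the known transmitted sequence lets the receiver separate signal power from residual noise power. The scalar least-squares sketch below only shows that principle; real LTE SRS processing operates per subcarrier on Zadoff-Chu sequences, and the model here is an assumption.

```python
# Assumption-laden sketch of SRS-style SNR estimation: with known pilots
# s and received samples y = h*s + n, a least-squares channel estimate
# h_ls recovers the signal component, and the residual y - h_ls*s gives
# an estimate of the noise power.

import random, cmath

def estimate_snr(s, y):
    """LS channel estimate, then residual-based noise power; returns linear SNR."""
    h_ls = sum(yi * si.conjugate() for yi, si in zip(y, s)) / \
           sum(abs(si) ** 2 for si in s)
    signal_pow = abs(h_ls) ** 2 * sum(abs(si) ** 2 for si in s) / len(s)
    noise_pow = sum(abs(yi - h_ls * si) ** 2 for yi, si in zip(y, s)) / len(s)
    return signal_pow / noise_pow
```

The accuracy of such an estimator degrades as the channel decorrelates across the sounding bandwidth, which is why the paper evaluates it under several channel conditions.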
13.
Long Term Evolution (LTE) has become a 4G wireless technology standard. To date, most research on LTE packet scheduling has focused on the downlink, while the uplink has received comparatively little attention. Existing uplink scheduling cannot guarantee that real-time packets are transmitted within their delay deadlines, and suffers from poor fairness and heavy packet dropping. A new uplink scheduling algorithm is therefore proposed: it builds an integer linear programming model from the delay constraints of the real-time traffic and schedules according to that model. Experimental results show that the algorithm guarantees delivery of real-time packets within their deadlines, ensures fairness, minimizes packet drops, and is well suited to real-time services.
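Deadline-constrained slot assignment of this kind selects, under a capacity limit, the subset of packets that avoids deadline misses. The toy below brute-forces one slot of a tiny instance instead of invoking a real ILP solver; the objective ordering is an assumption, not the paper's model.

```python
# Toy stand-in for delay-constrained uplink scheduling: pick at most
# `capacity` packets for the current slot, saving packets due now first,
# then maximizing the number sent, then preferring the most urgent.
# A real system would solve an ILP over many slots; this is one slot.

from itertools import combinations

def schedule_slot(packets, capacity):
    """packets: list of (id, slots_until_deadline); returns ids chosen this slot."""
    best = max(
        (c for r in range(capacity + 1) for c in combinations(packets, r)),
        key=lambda c: (len([p for p in c if p[1] == 0]),  # packets due right now
                       len(c),                            # then total packets sent
                       -sum(p[1] for p in c)),            # then overall urgency
    )
    return [p[0] for p in best]
```

Exhaustive search is exponential; the point of the ILP formulation in the paper is to get the same optimality guarantee from a solver rather than enumeration.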
14.
This article introduces the principles and characteristics of the uplink data multiplexing process in LTE, analyzes the token bucket algorithm used for logical channel prioritization, and discusses the logical channel resource allocation mechanism and the scheme adopted for uplink data multiplexing. For the logical channel multiplexing and uplink data assembly process, an improved, implementable data assembly scheme is proposed. Finally, the advantages of LTE logical channel mapping are summarized, and the correctness of the scheme is verified by testing code implementing it.
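The token bucket used for logical channel prioritization works as follows: each channel accumulates tokens at its configured rate up to a bucket size, and may place data on the uplink only while tokens remain. This is a standard textbook sketch, not the article's implementation; parameter names are illustrative.

```python
# Standard token-bucket sketch for logical channel prioritization: a
# channel earns `rate` tokens (bytes) per TTI up to a `burst` cap, and
# may transmit only data it can pay for in tokens.

class TokenBucket:
    def __init__(self, rate, burst):
        self.rate = rate          # tokens (bytes) added per TTI
        self.burst = burst        # bucket capacity (max accumulated tokens)
        self.tokens = burst       # start with a full bucket

    def tick(self):
        """Refill once per TTI, capped at the bucket size."""
        self.tokens = min(self.burst, self.tokens + self.rate)

    def consume(self, nbytes):
        """Spend tokens if available; returns True when transmission is allowed."""
        if nbytes <= self.tokens:
            self.tokens -= nbytes
            return True
        return False
```

Capping the bucket at `burst` bounds how large a transmission a long-idle channel can claim in one TTI, which is what keeps the prioritization between logical channels enforceable.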
15.
IEEE Transactions on Vehicular Technology, 2006, 55(5):1565-1581
In this paper, a hybrid resource management system for the uplink in Universal Mobile Telecommunications System wideband code-division multiple access (WCDMA) with components in the Node-B and user equipment (UE) has been proposed. A rate scheduler in the client focuses on average packet delays as a means of abstracting application-specific requirements from the rest of the resource management scheme. It controls uplink transmission through variable spreading gain to optimize resource usage while meeting target delays. Service change requests from the distributed rate schedulers are collectively processed through interservice and intraservice priority queuing in a manner that is shown to exhibit fairness in allocation of resources when cumulative load exceeds system capacity. The performance of the proposed algorithm is explored through discrete-event simulations for three classes of traffic, namely voice, video, and data, over the WCDMA uplink in the presence of short-term Rayleigh fading, automatic repeat request, forward error correction, target transmission delays to meet the respective quality of service, and frame error rate targets in a "multicell" environment. The authors analyze two alternatives for distributed resource management with the UE or Node-B in control of rate scheduling and observe the fairness in resource allocation of both systems. Priority of speech, video, and data traffic is respected and reflected in 95th percentile transmission delays for heavily loaded systems.
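The delay-driven rate control via variable spreading gain can be sketched as a simple step controller: when average packet delay exceeds the target, the UE halves its spreading factor (doubling its rate), and vice versa. The OVSF ladder is standard WCDMA, but the step rule and thresholds below are assumptions, not the paper's controller.

```python
# Hedged sketch of delay-driven rate control via variable spreading
# gain: lower spreading factor => higher data rate. The 0.5*target
# hysteresis threshold is an illustrative assumption.

SF_LADDER = [256, 128, 64, 32, 16, 8, 4]   # WCDMA OVSF spreading factors

def adjust_sf(sf, avg_delay, target_delay):
    """Step the spreading factor toward meeting the delay target."""
    i = SF_LADDER.index(sf)
    if avg_delay > target_delay and i < len(SF_LADDER) - 1:
        return SF_LADDER[i + 1]            # too slow: double the rate
    if avg_delay < 0.5 * target_delay and i > 0:
        return SF_LADDER[i - 1]            # comfortably fast: free up resources
    return sf                              # within band: hold
```

Keeping the controller in the UE, as one of the paper's two alternatives does, is what abstracts application-specific delay requirements away from the Node-B.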
16.
17.
An LTE uplink scheduling scheme matching the features of wireless cloud services was proposed for SDWN (software-defined wireless networking). The scheme first solves the resource allocation problem using binary integer programming, then calculates the optimal transmission rate of cloud services in each time slot using dynamic programming, and finally adjusts the transmission rate of cloud services in proportion to the current channel status using a QoS control method within the SDWN framework. The proposed scheme minimizes the energy consumption of cloud services while guaranteeing the transmission-rate demands of multiple services. The performance of the algorithm is verified by simulation.
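The dynamic-programming step (minimum-energy rate over time slots) can be illustrated on a toy model: split a bit budget across slots so that the sum of a convex per-slot energy cost is minimal. The quadratic cost and integer bit granularity below are assumptions for illustration, not the paper's energy model.

```python
# Toy DP for energy-minimizing rate allocation: dp[b] after t iterations
# holds the minimum energy to send b bits in t slots, with a convex
# per-slot cost. With a convex cost the optimum spreads bits evenly.

def min_energy_split(total_bits, slots, cost=lambda b: b * b):
    """Return the minimum total energy to send total_bits over `slots` slots."""
    INF = float("inf")
    dp = [INF] * (total_bits + 1)
    dp[0] = 0.0
    for _ in range(slots):
        dp = [min(dp[b - x] + cost(x) for x in range(b + 1))
              for b in range(total_bits + 1)]
    return dp[total_bits]
```

With a quadratic cost, sending 6 bits over 3 slots is cheapest as 2+2+2 (energy 12) rather than 6+0+0 (energy 36), which is the intuition behind pacing cloud-service traffic instead of bursting it.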
18.
19.
Scheduling procedures implemented in wireless networks consist of varied workflows such as resource allocation, channel gain improvement, and reduction in packet arrival delay. Among these techniques, Long Term Evolution (LTE) scheduling is preferred for its high-speed communication and low bandwidth consumption. LTE allocates resources to workflows in both the time and frequency domains. Normally, the information gathered prior to scheduling increases processing time, since every attribute of each user has to be verified. To address this issue, recent research has analyzed parallel processing via data mining. The labels assigned to user attributes contribute primarily to scheduling time slots effectively; label assignment reduces delay, and parallel processing via data mining increases throughput. Additionally, extracting matched data from the library and predicting available channels with fewer dimensions pose major challenges in LTE scheduling. This paper surveys LTE scheduling algorithms, dimensionality reduction techniques, optimal feature selection techniques, multi-level classification techniques, and data mining combined with LTE techniques. The survey illustrates in detail the impact of each technique on 3G/4G networks, channel availability prediction, and time-slot scheduling. A tabular comparison of the techniques involved in the respective LTE processes reveals that verifying channel and user availability is the primary function of LTE scheduling. The survey identifies limitations of existing systems, such as computational complexity and poor scheduling performance, and encourages researchers to develop novel LTE scheduling algorithms.