20 similar documents found
1.
In recent years, machine learning has been widely applied to network management. However, as communication networks grow ever more complex, the nonlinearity and uncertainty present in the network make learning very difficult. To improve learning performance, an artificial neural network scheme based on random mapping is proposed. Its key idea is to introduce randomness into both the learning process and the network topology, making the neural network more adaptable to the learning target and enabling faster, more accurate convergence. The approach has been deployed in the live network of China Mobile Group Shanxi Co., Ltd. (hereinafter "Shanxi Mobile") and has produced good results.
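The abstract does not spell out the architecture, but one common reading of a "random mapping" neural network is a fixed random nonlinear projection followed by a trained linear readout (extreme-learning-machine style). The sketch below illustrates that interpretation only; the layer size, activation, and ridge regularizer are all assumptions, not details from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def fit_random_feature_net(X, y, n_hidden=256, reg=1e-2):
    """Fit a network whose hidden weights are random and frozen."""
    W = rng.normal(size=(X.shape[1], n_hidden))   # random projection
    b = rng.normal(size=n_hidden)
    H = np.tanh(X @ W + b)                        # random nonlinear features
    # Closed-form ridge solution for the readout layer only.
    beta = np.linalg.solve(H.T @ H + reg * np.eye(n_hidden), H.T @ y)
    return W, b, beta

def predict(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta

# Toy usage with synthetic KPI-like inputs and a scalar target.
X_train, y_train = rng.normal(size=(500, 20)), rng.normal(size=500)
W, b, beta = fit_random_feature_net(X_train, y_train)
y_hat = predict(X_train, W, b, beta)
```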
2.
Telecom operators hold vast amounts of data, but for various reasons its quality is often poor, with large volumes of incomplete or missing records. Mining of the existing data can only proceed once the data meet quality requirements and reach a sufficient sampling ratio. Building on the existing nationwide log-retention system, this work designs a template library of complete data samples, identifies data that fail the quality requirements, uses a random forest algorithm to find the best-matching identical or related records to fill the gaps and raise data quality, and applies backtracking feedback to optimize and expand the template library. A data-completion subsystem is built within the nationwide log-retention system to provide end-to-end data quality assurance and improvement, completing and improving the quality of historical and even real-time data, so that the requirements of data processing and mining are ultimately met and the quality and value of the operator's data are increased.
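A minimal sketch of the completion idea described above, assuming a tabular log extract with one numeric field to impute: a random forest is trained on complete (template-like) rows and used to predict the missing values. The column names and data are hypothetical, and the real subsystem's template library and backtracking feedback are not modeled.

```python
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestRegressor

def impute_with_rf(df, target_col, feature_cols, n_estimators=200):
    """Fill NaNs in target_col using a random forest trained on complete rows."""
    complete = df.dropna(subset=[target_col])   # template-like rows
    missing = df[df[target_col].isna()]
    model = RandomForestRegressor(n_estimators=n_estimators, random_state=0)
    model.fit(complete[feature_cols], complete[target_col])
    out = df.copy()
    if len(missing):
        out.loc[missing.index, target_col] = model.predict(missing[feature_cols])
    return out

# Toy demo with hypothetical (numeric) log fields.
rng = np.random.default_rng(0)
logs = pd.DataFrame({"bytes_up": rng.exponential(1e3, 1000),
                     "bytes_down": rng.exponential(5e3, 1000),
                     "hour": rng.integers(0, 24, 1000),
                     "session_duration": rng.exponential(60, 1000)})
logs.loc[rng.choice(1000, 100, replace=False), "session_duration"] = np.nan
filled = impute_with_rf(logs, "session_duration", ["bytes_up", "bytes_down", "hour"])
```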
3.
To obtain a practical and reasonably accurate model for predicting the CET-4 (College English Test Band 4) pass rate, this paper applies a random forest model to CET-4 pass-rate prediction. Students' basic information (gender, ethnicity, major), college entrance examination English scores, college English scores over four semesters, and statistics on students' extracurricular English use serve as input variables, with passing or failing CET-4 as the class variable, on which a random-forest-based prediction model is built. Experimental results show that...
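Since the abstract lists the input and class variables explicitly, the kind of model described can be sketched directly; the column names, synthetic data, and split ratio below are assumptions rather than details from the paper.

```python
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# Synthetic stand-in for the student records described above.
rng = np.random.default_rng(0)
n = 800
df = pd.DataFrame({
    "gender": rng.integers(0, 2, n),
    "ethnicity": rng.integers(0, 5, n),
    "major": rng.integers(0, 10, n),
    "gaokao_english": rng.normal(110, 15, n),
    "english_sem1": rng.normal(75, 10, n),
    "english_sem2": rng.normal(75, 10, n),
    "english_sem3": rng.normal(75, 10, n),
    "english_sem4": rng.normal(75, 10, n),
    "extracurricular_hours": rng.exponential(3, n),
    "passed_cet4": rng.integers(0, 2, n),          # 1 = passed CET-4
})

X, y = df.drop(columns="passed_cet4"), df["passed_cet4"]
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
clf = RandomForestClassifier(n_estimators=300, random_state=0).fit(X_tr, y_tr)
print("test accuracy:", accuracy_score(y_te, clf.predict(X_te)))
```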
4.
Aditya Sai Srinivas T., Ramasubbareddy Somula, Govinda K., Akriti Saxena, Pramod Reddy A. International Journal of Communication Systems, 2020, 33(13)
The precision of rainfall forecasting is vital owing to current world climate change. As deterministic weather forecasting models are usually time consuming, it becomes challenging to use the large volume of data at hand efficiently. Machine learning methods have already proven to be a good replacement for traditional deterministic approaches in weather prediction. This paper presents an approach using recurrent neural networks (RNN) and long short-term memory (LSTM) techniques to improve rainfall forecast performance, compared against random forest and XGBoost classifiers. The goal is to predict a set of hourly rainfall levels from sequences of weather radar measurements. Python libraries are used to forecast the time series data. The training set comprises data from the first 20 days of every month and the inference set data from the remaining days, which ensures that the training and inference sets are largely independent. The idea resides in implementing an end-to-end learning framework.
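A hedged sketch of the LSTM variant and the day-of-month split described above, using synthetic arrays in place of the radar measurements; the sequence length, feature count, and network width are assumptions.

```python
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

n_steps, n_features = 24, 8                    # assumed sequence length / width
model = keras.Sequential([
    keras.Input(shape=(n_steps, n_features)),
    layers.LSTM(64),
    layers.Dense(1),                            # hourly rainfall level
])
model.compile(optimizer="adam", loss="mae")

# Hypothetical arrays: X has shape (samples, n_steps, n_features);
# day_of_month holds one entry per sample.
X = np.random.rand(1000, n_steps, n_features)
y = np.random.rand(1000)
day_of_month = np.random.randint(1, 31, size=1000)
train_idx, test_idx = day_of_month <= 20, day_of_month > 20   # first 20 days train
model.fit(X[train_idx], y[train_idx], epochs=5, verbose=0)
mae = model.evaluate(X[test_idx], y[test_idx], verbose=0)
```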
5.
Nisha Kandhoul, Sanjay K. Dhurandher, Isaac Woungang. International Journal of Communication Systems, 2021, 34(1): e4646
Designing a safe and reliable way to communicate messages among the devices and humans forming an Opportunistic Internet of Things network (OppIoT) has been a challenge because the broadcast mode of message sharing is used. To help address this challenge, this paper proposes a Random Forest Classifier (RFC)-based safe and reliable routing protocol for OppIoT (called RFCSec) which ensures space efficiency, hash-based message integrity, and high packet delivery, while simultaneously protecting the network against safety threats such as packet collusion, hypernova, supernova, and wormhole attacks. The proposed RFCSec scheme is composed of two phases. In the first, the RFC is trained on a real data trace; based on the output of this training, the second phase classifies the nodes encountered by a given node into one of the output classes according to their past behavior in the network. This helps proactively isolate malicious nodes from the routing process and encourages the participation of nodes with good message-forwarding behavior, low packet dropping rates, high buffer availability, and a higher probability of having delivered messages in the past. Simulation results using the ONE simulator show that the proposed RFCSec secure routing scheme is superior to the MLProph, RLProph, and CAML routing protocols, chosen as benchmarks, in terms of legitimate packet delivery, probability of message delivery, count of dropped messages, and latency in packet delivery. The out-of-bag error obtained is also minimal.
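A sketch of the node-classification step only (not the full RFCSec routing protocol), assuming synthetic behavioral features and class labels; it mainly illustrates how the reported out-of-bag error can be obtained from a random forest.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Hypothetical feature rows per encountered node:
# [packet_drop_rate, buffer_free_ratio, past_delivery_prob, forward_count]
# Hypothetical classes: 0 = benign, 1 = collusion, 2 = hypernova, 3 = wormhole.
X = np.random.rand(2000, 4)
y = np.random.randint(0, 4, size=2000)

rfc = RandomForestClassifier(n_estimators=200, oob_score=True, random_state=0)
rfc.fit(X, y)
print("out-of-bag error:", 1.0 - rfc.oob_score_)   # reported as minimal in the paper
node_class = rfc.predict(np.random.rand(1, 4))     # classify an encountered node
```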
6.
7.
Despite the capacity of conjugated materials to enhance the power conversion efficiency (PCE) of organic photovoltaics (OPV), a comprehensive survey of unexplored materials is beyond the reach of most researchers' resources. In such cases, a data-driven approach using machine learning (ML) is an efficient alternative; however, bridging the gap between experimental observations and data science requires a number of refinements. In this investigation, using a random forest model trained on an experimental dataset, a high correlation coefficient of 0.85 is achieved for the ML of polymer and non-fullerene small-molecule-acceptor OPVs, and virtual screening is performed on 200,932 conjugated polymers generated by the combinatorial coupling of donor and acceptor units. Further, to evaluate the effectiveness of the ML model, a series of conjugated polymers (based on benzodithiophene and thiazolothiazole) with different alkyl chains were designed, synthesized, and characterized. Among these, PBDTTzEH:IT-4F showed a PCE of 10.10%, in good agreement with the ML predictions regarding the choice of alkyl chains. The study thus demonstrates how ML can be used to develop OPVs from a relatively small number of experimental data points (566) while screening numerous molecular structures.
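A minimal sketch of the fit-then-screen workflow, with random placeholder descriptors standing in for the actual molecular featurization; only the counts (566 experimental points, 200,932 virtual candidates) are taken from the abstract.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

X_exp = np.random.rand(566, 32)          # 566 experimental descriptor vectors (placeholder)
y_exp = np.random.rand(566) * 15         # measured PCE (%)

model = RandomForestRegressor(n_estimators=300, random_state=0).fit(X_exp, y_exp)
# In-sample correlation check only; the paper's 0.85 would come from proper validation.
r = np.corrcoef(y_exp, model.predict(X_exp))[0, 1]

X_virtual = np.random.rand(200_932, 32)  # combinatorially generated candidates (placeholder)
pce_pred = model.predict(X_virtual)
top_candidates = np.argsort(pce_pred)[::-1][:100]   # best-ranked polymers for synthesis
```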
8.
The cellular mobile communication environment is complex and time-varying, so non-line-of-sight (NLOS) propagation between base stations and the mobile terminal is unavoidable. NLOS propagation significantly increases the error of the distance measurements between base stations and the mobile terminal, causing a sharp degradation of positioning performance. To accurately distinguish line-of-sight (LOS) from NLOS base-station signals, a random-forest-based LOS/NLOS base-station identification method is proposed. By analyzing the correlation between the measured distances from the mobile terminal to each base-station receiver and the positioning error, LOS/NLOS measured distances are selected as features for training a classifier, which is then used to identify LOS/NLOS base stations. Simulation results show that the method correctly identifies NLOS base stations more than 90% of the time and achieves good positioning performance.
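A short sketch of the classification task with synthetic range-derived features; the ">90% correct NLOS identification" quoted above corresponds to the recall of the NLOS class on held-out data.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import recall_score
from sklearn.model_selection import train_test_split

# Hypothetical features, e.g. range residuals and their statistics per base station.
X = np.random.rand(5000, 6)
y = np.random.randint(0, 2, size=5000)   # 0 = LOS, 1 = NLOS

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
nlos_recall = recall_score(y_te, clf.predict(X_te), pos_label=1)   # NLOS identification rate
```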
9.
Positioning schemes based on mobile cellular network technology are an important way to support applications such as network optimization, emergency rescue, police patrol, and location-based services. Traditional schemes based on cell/base-station location information have low accuracy and large positioning errors and cannot satisfy some application requirements. Fingerprint-based positioning can substantially improve accuracy on top of coarse cell-based positioning, reduce computational cost, and broaden applicability, and it has become a hot research topic. Targeting outdoor fingerprint positioning, two machine-learning-based outdoor fingerprint positioning schemes, a gridded one and a non-gridded one, are studied and analyzed in depth. Large-scale fingerprint data are cleaned by parameter weighting, data fitting, and other methods to improve the validity of the data source. Through the implementation of sub-modules such as delimiting the study area, gridding, building the fingerprint database, model training, model correction, de-gridding, coupling with coarse positioning, parameter matching, and parameter training, the running efficiency and positioning accuracy of the algorithms are analyzed and optimized, and the key indicators affecting algorithm performance are identified. The performance of the two fingerprint-based positioning schemes is then analyzed in light of the simulation results. Finally, typical application scenarios of machine-learning-based fingerprint positioning in practice are introduced.
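The abstract does not name the specific learners, so the sketch below uses random forests purely for illustration, contrasting the gridded formulation (predict a grid-cell label) with the non-gridded one (regress coordinates directly); the grid size and feature layout are assumptions.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier, RandomForestRegressor

# Synthetic fingerprints: e.g. received signal strength from 12 cells,
# plus the true (x, y) position in metres within the study area.
fingerprints = np.random.rand(10000, 12)
xy = np.random.rand(10000, 2) * 1000
grid_size = 50.0                                 # assumed 50 m x 50 m grid cells
grid_label = ((xy[:, 0] // grid_size) * 100 + (xy[:, 1] // grid_size)).astype(int)

gridded = RandomForestClassifier(n_estimators=100, random_state=0)
gridded.fit(fingerprints, grid_label)            # gridded: classify the cell

non_gridded = RandomForestRegressor(n_estimators=100, random_state=0)
non_gridded.fit(fingerprints, xy)                # non-gridded: regress coordinates

est_cell = gridded.predict(fingerprints[:1])
est_xy = non_gridded.predict(fingerprints[:1])
```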
10.
11.
12.
Gurpal Singh Chhabra, Varinderpal Singh, Maninder Singh. International Journal of Communication Systems, 2018, 31(15)
With an exponential increase in the size and complexity of the documents to be investigated, existing network forensics methods are found to be inefficient in terms of accuracy and detection ratio. Existing techniques for network forensic analysis exhibit inherent limitations when processing a huge volume, variety, and velocity of data, which makes network forensics a time-consuming and resource-consuming task. To balance time taken against output delivered, these techniques limit the amount of data under analysis, resulting in polynomial time complexity. To mitigate these issues, this paper proposes an effective framework that overcomes these limitations and handles a large volume, variety, and velocity of data. An architectural setup consisting of the MapReduce framework on top of a Hadoop Distributed File System (HDFS) environment is proposed. The framework demonstrates its capability to handle the storage and processing of big data using cloud computing. In addition, a supervised machine learning algorithm (a random-forest-based decision tree) has been implemented to demonstrate better sensitivity. To train and validate the model, a publicly available dataset from CAIDA is used, and university network traffic samples of increasing size are taken for the experiments. The results confirm the superiority of the proposed framework in network forensics, with an average accuracy of 99.34% (malicious and non-malicious traffic).
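A sketch of only the distributed pre-processing stage, written with the mrjob library as a stand-in for the MapReduce-on-HDFS setup described above: it aggregates per-source flow statistics that could then feed the random-forest classifier. The log format and field positions are assumptions. Such a job would typically be launched with the Hadoop runner, e.g. `python flow_features.py -r hadoop hdfs:///traffic/logs/`.

```python
from mrjob.job import MRJob

class FlowFeatures(MRJob):
    """Aggregate per-source-IP packet and byte counts from raw traffic logs."""

    def mapper(self, _, line):
        fields = line.split(",")            # assumed: src_ip, dst_ip, bytes, ...
        src_ip, nbytes = fields[0], int(fields[2])
        yield src_ip, (1, nbytes)

    def reducer(self, src_ip, values):
        packets, total_bytes = 0, 0
        for p, b in values:
            packets += p
            total_bytes += b
        yield src_ip, {"packets": packets, "bytes": total_bytes}

if __name__ == "__main__":
    FlowFeatures.run()
```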
13.
Outdoor positioning systems based on the Global Navigation Satellite System have several shortcomings that make their use for indoor positioning impractical. Location fingerprinting, which utilizes machine learning, has emerged as a viable method and solution for indoor positioning due to its simple concept and accurate performance. In the past, shallow learning algorithms were traditionally used in location fingerprinting. Recently, the research community started utilizing deep learning methods for fingerprinting after witnessing the great success and superiority these methods have over traditional/shallow machine learning algorithms. This paper provides a comprehensive review of deep learning methods in indoor positioning. First, the advantages and disadvantages of various fingerprint types for indoor positioning are discussed. The solutions proposed in the literature are then analyzed, categorized, and compared against various performance evaluation metrics. Since data is key in fingerprinting, a detailed review of publicly available indoor positioning datasets is presented. While incorporating deep learning into fingerprinting has resulted in significant improvements, it has also introduced new challenges. These challenges, along with common implementation pitfalls, are discussed. Finally, the paper concludes with some remarks as well as future research trends.
14.
Named entity recognition (NER) continues to be an important task in natural language processing because it is featured as a subtask and/or subproblem in information extraction and machine translation. In Urdu language processing, it is a very difficult task. This paper proposes various deep recurrent neural network (DRNN) learning models with word embeddings. Experimental results demonstrate that they improve upon current state-of-the-art NER approaches for Urdu. The DRNN models evaluated include forward and bidirectional extensions of long short-term memory trained with backpropagation through time. The proposed models consider both language-dependent features, such as part-of-speech tags, and language-independent features, such as the "context windows" of words. The effectiveness of the DRNN models with word embeddings for NER in Urdu is demonstrated using three datasets. The results reveal that the proposed approach significantly outperforms previous conditional random field and artificial neural network approaches. The best F-measure values achieved on the three benchmark datasets using the proposed deep learning approaches are 81.1%, 79.94%, and 63.21%, respectively.
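A compact sketch of a bidirectional LSTM tagger of the general kind evaluated above; the vocabulary size, tag inventory, and sequence length are placeholders, and the paper's POS-tag and pretrained-embedding features are omitted.

```python
from tensorflow import keras
from tensorflow.keras import layers

vocab_size, n_tags, max_len, emb_dim = 20000, 9, 60, 100   # assumed sizes

model = keras.Sequential([
    keras.Input(shape=(max_len,)),
    layers.Embedding(vocab_size, emb_dim, mask_zero=True),   # word embeddings
    layers.Bidirectional(layers.LSTM(128, return_sequences=True)),
    layers.TimeDistributed(layers.Dense(n_tags, activation="softmax")),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
# model.fit(padded_token_ids, tag_ids, epochs=10)   # tag_ids shape: (N, max_len)
```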
15.
Machine-type communication (MTC) has attracted much attention because it provides pervasive connections for billions of MTC devices, forming a basis for the Internet of Things. Most works in the literature on machine-to-machine (M2M) communications focus on the medium access control (MAC) layer or on upper-layer applications such as e-health, energy management, and entertainment services. On the other hand, the physical (PHY) layer plays a pivotal role in M2M communications. To accommodate a large number of MTC devices, M2M should be made efficient enough in terms of its power consumption and spectrum utilisation.
16.
17.
Improving perceived network quality and customer satisfaction has always been the main thread of network optimization work, yet KPI metrics cannot reflect how the network is actually experienced, and the traditional approach of gauging customer satisfaction through surveys has major limitations. This paper studies in depth the mapping between KPI metrics and actual network experience, quantifies perception weight factors through big-data mining and machine-learning modeling, and on this basis builds a machine-learning-based network perception evaluation method, providing a new analytical approach and supporting tool for improving customer satisfaction.
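One plausible, heavily simplified way to quantify "perception weight factors" as described above: fit a model from KPIs to a satisfaction label and normalize its learned feature importances into weights. The KPI names, label, and model choice are assumptions, not details from the article.

```python
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestRegressor

# Synthetic stand-in data: per-cell KPIs plus a survey-style satisfaction score.
rng = np.random.default_rng(0)
kpis = ["rrc_setup_success", "handover_success", "dl_throughput",
        "latency_ms", "drop_rate"]
df = pd.DataFrame(rng.random((500, len(kpis))), columns=kpis)
df["satisfaction"] = rng.random(500) * 10

model = RandomForestRegressor(n_estimators=300, random_state=0)
model.fit(df[kpis], df["satisfaction"])
weights = pd.Series(model.feature_importances_, index=kpis)
weights /= weights.sum()                                    # normalized weight factors
perceived = (df[kpis].rank(pct=True) * weights).sum(axis=1) # composite perception score
```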
18.
Zhiping Jin, Zhibiao Liang, Meirong He, Yao Peng, Hanxiao Xue, Yu Wang. International Journal of Network Management, 2023, 33(3): e2222
The classification of network traffic, which involves classifying and identifying the type of network traffic, is the most fundamental step toward network service improvement and modern network management. Classic machine learning and deep learning methods have been widely adopted in the field of network traffic classification. However, there are two major challenges in practice. One is the user privacy concern in cross-domain traffic data sharing for the purpose of training a global classification model, and the other is the difficulty of obtaining a large amount of labeled data for training. In this paper, we propose a novel approach using federated semi-supervised learning for network traffic classification, in which the federated server and clients from different domains work together to train a global classification model. Unlabeled data are used on the client side, and labeled data are used on the server side. Experimental results on a public dataset show that the accuracy of the proposed approach can reach 97.81%, and that the accuracy gap between the federated learning approach and centralized training is minimal.
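A toy sketch of the server-side aggregation step in a federated setup like the one above (FedAvg-style weighted averaging of client parameters); the model architecture, pseudo-labeling rule, and client counts are assumptions and are not shown.

```python
import numpy as np

def federated_average(client_weights, client_sizes):
    """client_weights: one list of numpy arrays (layer parameters) per client."""
    total = float(sum(client_sizes))
    averaged = []
    for layer_idx in range(len(client_weights[0])):
        layer = sum(w[layer_idx] * (n / total)
                    for w, n in zip(client_weights, client_sizes))
        averaged.append(layer)
    return averaged

# Toy usage: three clients, each contributing two "layers" of parameters
# learned locally (e.g. on pseudo-labeled, unlabeled traffic).
clients = [[np.random.rand(4, 4), np.random.rand(4)] for _ in range(3)]
sizes = [1200, 800, 2000]
global_weights = federated_average(clients, sizes)
```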
19.
Supporting a huge number of machine-to-machine devices with different priorities in Long Term Evolution networks is addressed in this paper. We propose a learning automaton (LA)-based scheme for dynamically allocating random access resources to different classes of machine-to-machine devices according to their priorities and demands in each cycle. A second LA-based scheme then adjusts the barring factor for each class to control possible overload. We show that, with an appropriate updating procedure for these LAs, the system performance asymptotically converges to the optimal performance achieved when the evolved Node B knows a priori the number of access-attempting devices from each class. Simulation results illustrate the performance of the proposed scheme in allocating random access resources to the defined classes and in adjusting the barring factor for each of them.
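A minimal sketch of a linear reward-inaction (L_RI) learning automaton, the general mechanism behind schemes of this kind: action probabilities shift toward rewarded actions. The reward definition, the device classes, and the barring-factor coupling of the paper are simplified away here.

```python
import random

def lri_update(probs, chosen, rewarded, a=0.05):
    """Linear reward-inaction: move probability mass toward a rewarded action."""
    if not rewarded:                       # reward-inaction: penalties leave probs unchanged
        return probs
    new = [p * (1 - a) for p in probs]
    new[chosen] += a                       # p_chosen becomes p + a * (1 - p)
    return new

probs = [1 / 3] * 3                        # e.g. 3 candidate random-access allocations
for _ in range(1000):
    action = random.choices(range(3), weights=probs)[0]
    reward = action == 2                   # toy environment: allocation 2 avoids overload
    probs = lri_update(probs, action, reward)
```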
20.
As the content carried by data transmission grows richer and transmission methods diversify, network security management needs to move away from the traditional, largely passive management model and strengthen active detection of network attacks. Spark-based random forest algorithms have already been studied in depth in the theoretical literature, and thanks to their advantages in practical operation they perform well when applied to network intrusion detection. This paper examines the research in this area...
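A hedged sketch of a Spark-based random forest intrusion detector using PySpark's MLlib; the HDFS path, column names, and label encoding are assumptions, not taken from the (truncated) abstract.

```python
from pyspark.sql import SparkSession
from pyspark.ml.feature import VectorAssembler
from pyspark.ml.classification import RandomForestClassifier
from pyspark.ml.evaluation import MulticlassClassificationEvaluator

spark = SparkSession.builder.appName("rf-ids").getOrCreate()
# Hypothetical flow records with numeric features and an integer "label" column.
df = spark.read.csv("hdfs:///ids/flows.csv", header=True, inferSchema=True)

feature_cols = [c for c in df.columns if c != "label"]
assembler = VectorAssembler(inputCols=feature_cols, outputCol="features")
train, test = assembler.transform(df).randomSplit([0.8, 0.2], seed=42)

rf = RandomForestClassifier(labelCol="label", featuresCol="features", numTrees=100)
model = rf.fit(train)
accuracy = MulticlassClassificationEvaluator(
    labelCol="label", metricName="accuracy").evaluate(model.transform(test))
```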