Traffic classification method based on a spatiotemporal feature adaptive fusion network
Citation: Yang Yu, Tang Dongming, Li Juguang, Xiao Yufeng. Traffic classification method based on a spatiotemporal feature adaptive fusion network[J]. Electronic Measurement Technology, 2024, 47(3): 166-174
Authors: Yang Yu  Tang Dongming  Li Juguang  Xiao Yufeng
Affiliation: School of Information Engineering, Southwest University of Science and Technology
Funding: Supported by the National Natural Science Foundation of China (12175187)
Abstract: To address problems such as the sharp rise in network security incidents and the growing network-management burden caused by the instantaneous surge of network traffic, a deep-learning-based parallel network structure combining ResNet and a one-dimensional Vision Transformer is proposed to identify and classify network traffic. ResNet extracts deep spatial features of the traffic data, ensuring the accuracy of traffic identification, while the one-dimensional Vision Transformer extracts more representative temporal features. An attention mechanism adaptively fuses the two kinds of features into a more comprehensive representation, improving the network's ability to identify traffic. Experiments on the ISCX VPN-nonVPN dataset show that the proposed method achieves 99.5% accuracy in application-level traffic classification, improvements of 0.9%, 3.6%, 6.6%, and 3.3% over standalone ResNet, the one-dimensional Vision Transformer, a classic one-dimensional CNN, and CNN+LSTM, respectively. On the USTC-TFC 2016 dataset, the method readily distinguishes malicious from benign traffic and further classifies 13 applications with an average accuracy of 98.92%, demonstrating its ability to identify malicious traffic and perform fine-grained classification.

Keywords: traffic classification  ResNet  vision Transformer  multi-head attention mechanism  feature fusion

Traffic classification based on spatiotemporal feature adaptive fusion network
Yang Yu, Tang Dongming, Li Juguang, Xiao Yufeng. Traffic classification based on spatiotemporal feature adaptive fusion network[J]. Electronic Measurement Technology, 2024, 47(3): 166-174
Authors:Yang Yu  Tang Dongming  Li Juguang  Xiao Yufeng
Affiliation:School of Information Engineering, Southwest University of Science and Technology, Mianyang 621010, China
Abstract: To address the current surge in network traffic, which has led to a sharp rise in network security incidents and a heavier network-management burden, a deep-learning-based architecture is proposed that runs ResNet and a one-dimensional Vision Transformer in parallel to identify and classify network traffic. ResNet extracts deep spatial features from the flow data, ensuring high accuracy in traffic recognition, while the one-dimensional Vision Transformer captures more representative temporal features. An attention mechanism adaptively merges the two types of features into a more comprehensive representation, enhancing the network's traffic-identification capability. Experiments on the ISCX VPN-nonVPN dataset show that the proposed method achieves 99.5% accuracy in application-level traffic classification, improvements of 0.9%, 3.6%, 6.6%, and 3.3% over standalone ResNet, the one-dimensional Vision Transformer, a classical one-dimensional Convolutional Neural Network (1D-CNN), and CNN combined with Long Short-Term Memory (CNN+LSTM), respectively. On the USTC-TFC 2016 dataset, the proposed method not only readily identifies malicious traffic but also classifies 13 different applications, with an average classification accuracy of 98.92%, demonstrating its ability to recognize malicious traffic and perform fine-grained classification tasks.
Keywords:traffic classification;ResNet;vision Transformer;multi-head attention mechanism;feature fusion
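The paper itself does not publish code; the following is a minimal PyTorch sketch of the kind of architecture the abstract describes: a ResNet-style 1D convolutional branch for spatial features, a 1D Vision-Transformer-style branch for temporal features, and multi-head attention that adaptively fuses the two feature vectors before classification. All dimensions here (input length 784, embedding size 64, patch size 16, 13 classes) are illustrative assumptions, not the authors' settings.

```python
# Hedged sketch, not the authors' implementation: parallel spatial/temporal
# branches with attention-based adaptive fusion. Input is assumed to be a
# 1D byte sequence of length 784; 13 classes mirror the USTC-TFC experiment.
import torch
import torch.nn as nn


class SpatialBranch(nn.Module):
    """ResNet-style 1D convolutional branch for deep spatial features."""

    def __init__(self, dim=64):
        super().__init__()
        self.stem = nn.Conv1d(1, dim, kernel_size=7, stride=2, padding=3)
        self.block = nn.Sequential(
            nn.Conv1d(dim, dim, 3, padding=1), nn.BatchNorm1d(dim), nn.ReLU(),
            nn.Conv1d(dim, dim, 3, padding=1), nn.BatchNorm1d(dim),
        )
        self.pool = nn.AdaptiveAvgPool1d(1)

    def forward(self, x):                      # x: (B, 1, L)
        h = self.stem(x)
        h = torch.relu(h + self.block(h))      # residual connection
        return self.pool(h).squeeze(-1)        # (B, dim)


class TemporalBranch(nn.Module):
    """1D Vision-Transformer-style branch for temporal features."""

    def __init__(self, dim=64, patch=16, seq_len=784):
        super().__init__()
        # Non-overlapping 1D patch embedding, as in ViT but on a sequence.
        self.embed = nn.Conv1d(1, dim, kernel_size=patch, stride=patch)
        self.pos = nn.Parameter(torch.zeros(1, seq_len // patch, dim))
        layer = nn.TransformerEncoderLayer(dim, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)

    def forward(self, x):                      # x: (B, 1, L)
        t = self.embed(x).transpose(1, 2)      # (B, n_patches, dim)
        t = self.encoder(t + self.pos)
        return t.mean(dim=1)                   # (B, dim)


class FusionClassifier(nn.Module):
    """Adaptively fuse the two branch features with multi-head attention."""

    def __init__(self, dim=64, n_classes=13):
        super().__init__()
        self.spatial = SpatialBranch(dim)
        self.temporal = TemporalBranch(dim)
        self.attn = nn.MultiheadAttention(dim, num_heads=4, batch_first=True)
        self.head = nn.Linear(dim, n_classes)

    def forward(self, x):
        # Treat the two branch outputs as a length-2 token sequence so that
        # attention can learn how much to weight each feature source.
        feats = torch.stack([self.spatial(x), self.temporal(x)], dim=1)
        fused, _ = self.attn(feats, feats, feats)   # (B, 2, dim)
        return self.head(fused.mean(dim=1))         # (B, n_classes)


model = FusionClassifier()
logits = model(torch.randn(2, 1, 784))  # two dummy flow samples
print(logits.shape)                     # torch.Size([2, 13])
```

Stacking the two branch outputs as a two-token sequence is one simple way to realize "adaptive fusion": the attention weights determine, per sample, how much each branch contributes, rather than concatenating the features with fixed importance.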