Funding: Foreign Cooperation Project of the Fujian Provincial Department of Science and Technology (2020I0014)
Received: October 8, 2023; Revised: November 9, 2023

Transformer Traffic Flow Prediction Model Integrating Multiple Spatiotemporal Self-attention Mechanisms
CAO Wei, WANG Xing, ZOU Fu-Min, JIN Biao, WANG Xiao-Jun. Transformer Traffic Flow Prediction Model Integrating Multiple Spatiotemporal Self-attention Mechanisms[J]. Computer Systems & Applications, 2024, 33(4): 82-92.
Authors:CAO Wei  WANG Xing  ZOU Fu-Min  JIN Biao  WANG Xiao-Jun
Affiliation:College of Computer and Cyber Security, Fujian Normal University, Fuzhou 350117, China;Digital Fujian Institute of Big Data Security Technology, Fujian Normal University, Fuzhou 350117, China;Fujian Key Laboratory of Automotive Electronics and Electric Drive, Fujian University of Technology, Fuzhou 350118, China
Abstract: Traffic flow prediction is an important method for achieving urban traffic optimization in intelligent transportation systems, and accurate prediction is of great significance for traffic management and guidance. However, owing to its strong spatiotemporal dependencies, traffic flow exhibits complex nonlinear characteristics. Existing methods mainly consider the local spatiotemporal features of nodes in the road network and overlook the long-term spatiotemporal characteristics of all nodes. To fully exploit the complex spatiotemporal dependencies in traffic flow data, this study proposes a Transformer-based traffic flow prediction model, the multi-spatiotemporal self-attention Transformer (MSTTF). The model embeds temporal and spatial information through positional encoding in the embedding layer; in the attention layer it fuses multiple self-attention mechanisms, including adjacent-spatial, similar-spatial, temporal, and spatiotemporal self-attention, to uncover latent spatiotemporal dependencies in the data; and it makes predictions in the output layer. Experimental results show that MSTTF reduces MAE by 10.36% on average compared with the traditional spatiotemporal Transformer. In particular, compared with the state-of-the-art PDFormer model, MSTTF achieves an average MAE reduction of 1.24%, indicating better predictive performance.
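The attention layer described in the abstract combines spatial attention restricted to road-network neighbours with branches that attend more widely. The following is a minimal NumPy sketch of that idea, not the paper's actual implementation: the adjacency matrix, the unmasked "global" branch standing in for similarity-based attention, and the averaging fusion rule are all illustrative assumptions.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def masked_self_attention(x, mask=None):
    """Scaled dot-product self-attention.
    x: (n, d) node features; mask: (n, n) boolean, True = may attend."""
    d = x.shape[-1]
    scores = x @ x.T / np.sqrt(d)              # (n, n) pairwise scores
    if mask is not None:
        scores = np.where(mask, scores, -1e9)  # block disallowed node pairs
    return softmax(scores, axis=-1) @ x        # attention-weighted values

# Toy road network: 4 sensor nodes, feature dimension 8.
rng = np.random.default_rng(0)
x = rng.standard_normal((4, 8))

# Hypothetical adjacency mask: each node attends only to its road neighbours.
adj = np.array([[1, 1, 0, 0],
                [1, 1, 1, 0],
                [0, 1, 1, 1],
                [0, 0, 1, 1]], dtype=bool)

out_adjacent = masked_self_attention(x, adj)  # adjacent-spatial branch
out_global = masked_self_attention(x)         # unmasked branch (stand-in for
                                              # the similarity-based branch)

# Fuse the branch outputs; a simple average stands in for the paper's
# unspecified fusion rule.
fused = 0.5 * (out_adjacent + out_global)
```

Temporal and spatiotemporal branches would apply the same pattern along the time axis (or the flattened node-time axis) with their own masks, before the fused representation is passed to the output layer.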
Keywords:traffic flow prediction  intelligent transportation  spatial-temporal dependence  Transformer  self-attention mechanism