Fund project: Key Research and Development Program of Zhejiang Province (2021C01164)
Received: 2021-06-24
Revised: 2021-08-24

Time series prediction model based on multimodal information fusion
Minghui WU, Guangjie ZHANG, Canghong JIN. Time series prediction model based on multimodal information fusion[J]. Journal of Computer Applications, 2022, 42(8): 2326-2332.
Authors:Minghui WU  Guangjie ZHANG  Canghong JIN
Affiliation:School of Computer and Computing Science,Zhejiang University City College,Hangzhou Zhejiang 310015,China
College of Computer Science and Technology,Zhejiang University,Hangzhou Zhejiang 310027,China
Abstract: Traditional single-factor models cannot make full use of the information related to a time series, which limits the accuracy and reliability of their predictions. To address this problem, a time series prediction model based on multimodal information fusion, namely Skip-Fusion, was proposed to fuse the text data and numerical data in multimodal data. Firstly, different categories of text data were encoded by the pre-trained Bidirectional Encoder Representations from Transformers (BERT) model and by one-hot encoding. Then, a single vector representation fusing the multiple text features was obtained by a pre-trained model based on the global attention mechanism. After that, this vector representation was aligned with the numerical data in chronological order. Finally, the fusion of text and numerical features was realized by a Temporal Convolutional Network (TCN) model, and the shallow and deep features of the multimodal data were fused again through skip connections. In experiments on a stock price series dataset, the Skip-Fusion model achieved a Root Mean Square Error (RMSE) of 0.492 and a daily Return (R) of 0.930, both better than the results of existing single-modal and multimodal fusion models, while obtaining a goodness of fit of 0.955 on the coefficient of determination (R-squared). The experimental results show that the Skip-Fusion model can effectively perform multimodal information fusion and has high prediction accuracy and reliability.
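The two fusion steps described above — collapsing several text embeddings into one vector with global attention, then re-fusing shallow and deep TCN features through a skip connection — can be illustrated with a minimal NumPy sketch. This is not the paper's implementation: the embeddings, the query vector, the kernel weights, and the two-layer depth are all illustrative placeholders, and the real model learns these parameters end to end.

```python
import numpy as np

def softmax(x):
    """Numerically stable softmax over a 1-D array."""
    e = np.exp(x - np.max(x))
    return e / e.sum()

def global_attention_fuse(text_vecs, query):
    """Fuse several text embeddings into a single vector via global attention.

    text_vecs: (n_texts, d) embeddings, e.g. BERT outputs and one-hot codes
    query:     (d,) global query vector (learned in the real model)
    """
    scores = text_vecs @ query        # one relevance score per text
    weights = softmax(scores)         # attention distribution over the texts
    return weights @ text_vecs        # (d,) single fused representation

def causal_dilated_conv(x, w, dilation):
    """Causal dilated 1-D convolution, the building block of a TCN.

    output[t] = sum_j w[j] * x[t - j*dilation], zero-padded on the left
    so the output at time t never depends on future values.
    """
    k = len(w)
    pad = (k - 1) * dilation
    xp = np.concatenate([np.zeros(pad), x])
    return np.array([
        sum(w[j] * xp[t + pad - j * dilation] for j in range(k))
        for t in range(len(x))
    ])

rng = np.random.default_rng(0)

# Step 1: three hypothetical text embeddings (dim 4) fused into one vector.
texts = rng.normal(size=(3, 4))
fused = global_attention_fuse(texts, rng.normal(size=4))

# Step 2: a toy numeric series run through two causal dilated layers;
# the skip connection re-fuses shallow and deep features, as in Skip-Fusion.
series = rng.normal(size=16)
kernel = np.array([0.5, 0.3, 0.2])
shallow = causal_dilated_conv(series, kernel, dilation=1)
deep = causal_dilated_conv(np.tanh(shallow), kernel, dilation=2)
output = shallow + deep               # skip connection
```

In the full model the fused text vector would be aligned with the numeric series by date and concatenated channel-wise before the TCN; the sketch omits that alignment and shows only the two fusion mechanisms.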
Keywords:global attention mechanism  skip connection  multimodal fusion  time series prediction  stock price prediction  