[1] BACRY E, MASTROMATTEO I, MUZY J F.Hawkes processes in finance[J].Market Microstructure and Liquidity, 2015, 1(1):1550005.
[2] DALEY D J, VERE-JONES D.An introduction to the theory of point processes[M].2nd ed.Berlin, Germany:Springer, 2003.
[3] DU N, DAI H J, TRIVEDI R, et al.Recurrent marked temporal point processes:embedding event history to vector[C]//Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining.New York, USA:ACM Press, 2016:1555-1564.
[4] XIAO S, YAN J C, FARAJTABAR M, et al.Learning time series associated event sequences with recurrent point process networks[J].IEEE Transactions on Neural Networks and Learning Systems, 2019, 30(10):3124-3136.
[5] MEI H, EISNER J.The neural Hawkes process:a neurally self-modulating multivariate point process[C]//Proceedings of the 31st International Conference on Neural Information Processing Systems.Cambridge, USA:MIT Press, 2017:6754-6764.
[6] LI S, XIAO S, ZHU S, et al.Learning temporal point processes via reinforcement learning[C]//Proceedings of the 32nd Conference on Neural Information Processing Systems.Cambridge, USA:MIT Press, 2018:10781-10791.
[7] UPADHYAY U, DE A, RODRIGUEZ M G.Deep reinforcement learning of marked temporal point processes[C]//Proceedings of the 32nd Conference on Neural Information Processing Systems.Cambridge, USA:MIT Press, 2018:3168-3178.
[8] MIRZA M, OSINDERO S.Conditional generative adversarial nets[EB/OL].[2021-11-05].https://arxiv.org/pdf/1411.1784.pdf.
[9] KUPYN O, BUDZAN V, MYKHAILYCH M, et al.DeblurGAN:blind motion deblurring using conditional adversarial networks[C]//Proceedings of IEEE/CVF Conference on Computer Vision and Pattern Recognition.Washington D.C., USA:IEEE Press, 2018:8183-8192.
[10] KUPYN O, MARTYNIUK T, WU J R, et al.DeblurGAN-v2:deblurring (orders-of-magnitude) faster and better[C]//Proceedings of IEEE/CVF International Conference on Computer Vision.Washington D.C., USA:IEEE Press, 2019:8877-8886.
[11] AALEN O O, BORGAN O, GJESSING H K.Survival and event history analysis:a process point of view[M].Berlin, Germany:Springer, 2008.
[12] 芦佳明, 李晨龙, 魏毅强.自注意力时序点过程生成模型的Wasserstein学习方法[J].计算机应用研究, 2022, 39(2):456-460.
LU J M, LI C L, WEI Y Q.Wasserstein learning method for self-attention temporal point process generation model[J].Application Research of Computers, 2022, 39(2):456-460.(in Chinese)
[13] XIAO S, FARAJTABAR M, YE X J, et al.Wasserstein learning of deep generative point process models[C]//Proceedings of the 31st International Conference on Neural Information Processing Systems.Cambridge, USA:MIT Press, 2017:3250-3259.
[14] OMI T, UEDA N, AIHARA K.Fully neural network based model for general temporal point processes[C]//Proceedings of the 33rd Conference on Neural Information Processing Systems.Cambridge, USA:MIT Press, 2019:15-26.
[15] SHCHUR O, BILOŠ M, GÜNNEMANN S.Intensity-free learning of temporal point processes[C]//Proceedings of the 8th International Conference on Learning Representations.Washington D.C., USA:IEEE Press, 2020:144-150.
[16] GOODFELLOW I J, POUGET-ABADIE J, MIRZA M, et al.Generative adversarial networks[EB/OL].[2021-11-05].https://arxiv.org/pdf/1406.2661.pdf.
[17] ARJOVSKY M, CHINTALA S, BOTTOU L.Wasserstein GAN[EB/OL].[2021-11-05].https://arxiv.org/pdf/1701.07875v1.pdf.
[18] KOSTELICH E J, SCHREIBER T.Noise reduction in chaotic time-series data:a survey of common methods[J].Physical Review E, 1993, 48(3):1752-1763.
[19] HOCHREITER S, SCHMIDHUBER J.Long short-term memory[J].Neural Computation, 1997, 9(8):1735-1780.
[20] 邱锡鹏.神经网络与深度学习[M].北京:机械工业出版社, 2020.
QIU X P.Neural networks and deep learning[M].Beijing:China Machine Press, 2020.(in Chinese)
[21] XU Z, DU J, WANG J J, et al.Satellite image prediction relying on GAN and LSTM neural networks[C]//Proceedings of 2019 IEEE International Conference on Communications.Washington D.C., USA:IEEE Press, 2019:1-6.
[22] TAO X, GAO H Y, SHEN X Y, et al.Scale-recurrent network for deep image deblurring[C]//Proceedings of IEEE/CVF Conference on Computer Vision and Pattern Recognition.Washington D.C., USA:IEEE Press, 2018:8174-8182.
[23] YOU C, HONG D.Nonlinear blind equalization schemes using complex-valued multilayer feedforward neural networks[J].IEEE Transactions on Neural Networks, 1998, 9(6):1442-1455.
[24] KWOK T Y, YEUNG D Y.Constructive algorithms for structure learning in feedforward neural networks for regression problems[J].IEEE Transactions on Neural Networks, 1997, 8(3):630-645.
[25] IOFFE S, SZEGEDY C.Batch normalization:accelerating deep network training by reducing internal covariate shift[C]//Proceedings of the 32nd International Conference on Machine Learning.Washington D.C., USA:IEEE Press, 2015:448-456.
[26] GULRAJANI I, AHMED F, ARJOVSKY M, et al.Improved training of Wasserstein GANs[C]//Proceedings of the 31st International Conference on Neural Information Processing Systems.Cambridge, USA:MIT Press, 2017:5769-5779.