Beyond Temporal Pooling: Recurrence and Temporal Convolutions for Gesture Recognition in Video

Authors: Lionel Pigou, Aäron van den Oord, Sander Dieleman, Mieke Van Herreweghe, Joni Dambre

Affiliation: 1. Data Science Lab, ELIS, Ghent University, Ghent, Belgium; 2. Department of Linguistics, Ghent University, Ghent, Belgium

Abstract: Recent studies have demonstrated the power of recurrent neural networks for machine translation, image captioning and speech recognition. For the task of capturing temporal structure in video, however, there still remain numerous open research questions. Current research suggests using a simple temporal feature pooling strategy to take into account the temporal aspect of video. We demonstrate that this method is not sufficient for gesture recognition, where temporal information is more discriminative than in general video classification tasks. We explore deep architectures for gesture recognition in video and propose a new end-to-end trainable neural network architecture incorporating temporal convolutions and bidirectional recurrence. Our main contributions are twofold: first, we show that recurrence is crucial for this task; second, we show that adding temporal convolutions leads to significant improvements. We evaluate the different approaches on the Montalbano gesture recognition dataset, where we achieve state-of-the-art results.

Keywords:
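
The architecture summarized in the abstract combines per-frame spatial features, temporal convolutions and bidirectional recurrence. Below is a minimal, hypothetical PyTorch sketch of such a pipeline, not the authors' implementation: the placeholder CNN, the layer sizes, and the 21-class frame-wise output (20 Montalbano gestures plus an assumed no-gesture class) are illustrative assumptions.

import torch
import torch.nn as nn

class TemporalConvBiRNN(nn.Module):
    """Hypothetical sketch: per-frame CNN -> temporal convolution -> bidirectional LSTM."""
    def __init__(self, num_classes=21, feat_dim=128, hidden=64):
        super().__init__()
        # Per-frame spatial feature extractor (placeholder, not the paper's CNN).
        self.cnn = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
            nn.Flatten(), nn.Linear(32, feat_dim), nn.ReLU(),
        )
        # Temporal convolution over the sequence of frame features.
        self.temporal_conv = nn.Conv1d(feat_dim, feat_dim, kernel_size=3, padding=1)
        # Bidirectional recurrence over time.
        self.rnn = nn.LSTM(feat_dim, hidden, batch_first=True, bidirectional=True)
        # Frame-wise classifier: one gesture prediction per time step.
        self.classifier = nn.Linear(2 * hidden, num_classes)

    def forward(self, video):  # video: (batch, time, channels, height, width)
        b, t = video.shape[:2]
        feats = self.cnn(video.flatten(0, 1)).view(b, t, -1)               # (b, t, feat_dim)
        feats = self.temporal_conv(feats.transpose(1, 2)).transpose(1, 2)  # mix neighbouring frames
        out, _ = self.rnn(feats)                                           # (b, t, 2 * hidden)
        return self.classifier(out)                                        # (b, t, num_classes)

model = TemporalConvBiRNN()
scores = model(torch.randn(2, 16, 3, 64, 64))  # two clips of 16 RGB frames
print(scores.shape)  # torch.Size([2, 16, 21])

In contrast to temporal feature pooling, both the temporal convolution and the recurrent layer preserve frame ordering, which is the property the abstract argues is decisive for gesture recognition.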