Automatic annotation of tennis games: An integration of audio, vision, and learning
Authors: Fei Yan, Josef Kittler, David Windridge, William Christmas, Krystian Mikolajczyk, Stephen Cox, Qiang Huang
Affiliations: 1. Centre for Vision, Speech, and Signal Processing, University of Surrey, Guildford GU2 7XH, United Kingdom; 2. School of Computing Sciences, University of East Anglia, Norwich NR4 7TJ, United Kingdom
Abstract: Fully automatic annotation of tennis games from broadcast video is a task with great potential but enormous challenges. In this paper we describe our approach to this task, which integrates computer vision, machine listening, and machine learning. At the low-level processing stage, we improve upon our previously proposed state-of-the-art tennis ball tracking algorithm and employ audio signal processing techniques to detect key events and to construct features for classifying them. At the high-level analysis stage, we model event classification as a sequence labelling problem and investigate four machine learning techniques using simulated event sequences. Finally, we evaluate our proposed approach on three real-world tennis games and discuss the interplay between audio, vision, and learning. To the best of our knowledge, our system is the only one that can annotate tennis games at such a detailed level.
Keywords: Tennis annotation; Object tracking; Audio event classification; Sequence labelling; Structured output learning; Hidden Markov model
|