Personalized production of basketball videos from multi-sensored data under limited display resolution
Abstract:Integrating information from multiple cameras is essential in television production and intelligent surveillance systems. We propose an autonomous system for personalized production of basketball videos from multi-sensored data under limited display resolution. The problem consists of selecting, at each moment, the right view to display among the multiple video streams captured by the camera network. A view is defined by a camera index and the parameters of the image cropped within that camera's stream. We propose criteria for optimal planning of viewpoint coverage and camera selection. We also discuss perceptual comfort and the efficient integration of contextual information, implemented by smoothing the generated viewpoint/camera sequences to alleviate flickering visual artifacts and discontinuous story-telling artifacts. We design and implement the estimation process and verify it experimentally, showing that our method efficiently reduces these artifacts.
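The smoothing of camera sequences described in the abstract can be illustrated with a minimal sketch. The function below is an assumption about how such a smoothing step might be posed, not the authors' actual method: each frame carries a hypothetical per-camera interest score, and a Viterbi-style dynamic program picks the camera per frame while a switching penalty (`switch_cost`, an illustrative parameter) discourages the rapid camera changes that cause flickering.

```python
def smooth_camera_selection(quality, switch_cost):
    """Return a camera-index sequence maximizing total per-frame quality
    minus switching penalties, via Viterbi-style dynamic programming.

    quality: list of per-frame lists, quality[t][c] = score of camera c
             at frame t (hypothetical scores for illustration).
    switch_cost: penalty charged whenever the selected camera changes.
    """
    n_frames, n_cams = len(quality), len(quality[0])
    best = list(quality[0])   # best cumulative score ending at camera c
    back = []                 # back-pointers for path recovery
    for t in range(1, n_frames):
        prev_best = best[:]
        ptr, best = [], []
        for c in range(n_cams):
            # Either stay on the same camera or pay switch_cost to change.
            cand = [prev_best[p] - (switch_cost if p != c else 0.0)
                    for p in range(n_cams)]
            p_star = max(range(n_cams), key=cand.__getitem__)
            best.append(cand[p_star] + quality[t][c])
            ptr.append(p_star)
        back.append(ptr)
    # Trace back the optimal camera sequence from the best final state.
    c = max(range(n_cams), key=best.__getitem__)
    seq = [c]
    for ptr in reversed(back):
        c = ptr[c]
        seq.append(c)
    return seq[::-1]
```

With scores that alternate slightly between two cameras, a greedy per-frame choice would flicker between them every frame, whereas a sufficiently large `switch_cost` makes the dynamic program hold a single camera, mirroring the anti-flickering goal stated in the abstract.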
Keywords:
This article is indexed by ScienceDirect and other databases.