Similar Literature
20 similar articles were found (search time: 109 ms).
1.
Automatic annotation of semantic events allows effective retrieval of video content. In this work, we present solutions for highlight detection in sports videos. The proposed approach exploits the typical structure of a wide class of sports videos, namely those related to sports played in delimited venues with playfields of well-known geometry, such as soccer, basketball, swimming, and track and field disciplines. For these sports, we present a modeling scheme, of general applicability to this class, based on a limited set of visual cues and on finite state machines that encode the temporal evolution of highlights. Visual cues encode position and speed information coming from the camera and from the objects/athletes present in the scene, and are estimated automatically from the video stream. Algorithms for model checking and for visual cue estimation are discussed, as well as applications of the representation to different sport domains.
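As an illustration of the finite state machine scheme described above, the following minimal Python sketch encodes the temporal evolution of one hypothetical highlight from per-frame visual cues; the cue names, states, and thresholds are invented for the example and are not taken from the paper.

```python
# Minimal sketch of an FSM-based highlight model (illustrative only).
# Cue names ("pan_speed", "zone") and thresholds are hypothetical.

HIGHLIGHT_FSM = {
    # state: list of (predicate over per-frame cues, next state)
    "idle":     [(lambda c: c["pan_speed"] > 5.0, "fast_pan")],
    "fast_pan": [(lambda c: c["zone"] == "goal_area", "attack"),
                 (lambda c: c["pan_speed"] <= 1.0, "idle")],
    "attack":   [(lambda c: c["pan_speed"] <= 1.0, "accept")],  # play slows near goal
    "accept":   [],                                             # highlight detected
}

def detect_highlight(cue_stream, fsm=HIGHLIGHT_FSM, start="idle"):
    """Run per-frame visual cues through the model; return frame index of acceptance."""
    state = start
    for i, cues in enumerate(cue_stream):
        for predicate, nxt in fsm[state]:
            if predicate(cues):
                state = nxt
                break
        if state == "accept":
            return i
    return None

# Toy cue stream: the camera accelerates, reaches the goal area, then the play slows.
frames = [{"pan_speed": 0.2, "zone": "midfield"},
          {"pan_speed": 6.1, "zone": "midfield"},
          {"pan_speed": 6.5, "zone": "goal_area"},
          {"pan_speed": 0.4, "zone": "goal_area"}]
print(detect_highlight(frames))  # -> 3
```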

2.
Video annotation refers to labeling video content with semantic index information so that videos can be retrieved conveniently. The low-level visual features used in existing video annotation work are difficult to apply directly to the professional human actions found in sports videos. To address this problem, this work uses 2D human joint-point features extracted from video image sequences and builds a knowledge base of professional actions to annotate such actions in sports videos. A dynamic programming algorithm compares the differences between human actions across videos, and a co-training learning algorithm is incorporated to annotate sports videos semi-automatically. Experiments on tennis match videos show that the proposed algorithm reaches an action annotation accuracy of 81.4%, an improvement of 30.5% over existing professional-action annotation algorithms.
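The abstract does not state which dynamic programming formulation is used; the sketch below assumes a dynamic-time-warping-style alignment over sequences of 2D joint coordinates, which is one common way to compare human actions of different lengths. The co-training step is omitted.

```python
import numpy as np

def frame_distance(pose_a, pose_b):
    """Euclidean distance between two poses, each an (n_joints, 2) array of 2D joints."""
    return float(np.linalg.norm(pose_a - pose_b))

def dtw_action_distance(seq_a, seq_b):
    """Dynamic-programming (DTW-style) distance between two joint-point sequences."""
    n, m = len(seq_a), len(seq_b)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = frame_distance(seq_a[i - 1], seq_b[j - 1])
            cost[i, j] = d + min(cost[i - 1, j],      # skip a frame in seq_a
                                 cost[i, j - 1],      # skip a frame in seq_b
                                 cost[i - 1, j - 1])  # match the two frames
    return cost[n, m] / (n + m)   # length-normalised action difference

# Two toy motions with 15 joints per frame; the second is the first subsampled in time.
rng = np.random.default_rng(0)
base = rng.normal(size=(20, 15, 2))
print(dtw_action_distance(base, base[::2]))  # low cost: same action at a different speed
```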

3.
Semantic annotation of soccer videos: automatic highlights identification   (cited by 4: 0 self-citations, 4 citations by others)
Automatic semantic annotation of video streams allows both the extraction of significant clips for production logging and the indexing of video streams for posterity logging. Automatic annotation for production logging is particularly demanding, as it is applied to non-edited video streams and must rely only on visual information. Moreover, annotation must be computed in quasi real-time. In this paper, we present a system that performs automatic annotation of the principal highlights in soccer video, suited for both production and posterity logging. The knowledge of the soccer domain is encoded into a set of finite state machines, each of which models a specific highlight. Highlight detection exploits visual cues that are estimated from the video stream, particularly ball motion, the currently framed playfield zone, players’ positions, and the colors of players’ uniforms. The highlight models are checked against the current observations using a model checking algorithm. The system has been developed within the EU ASSAVID project.
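A minimal sketch of the model-checking idea, assuming a highlight model can be reduced to an expected sequence of playfield zones traversed while the ball is in motion; the model contents, cue names, and thresholds below are placeholders rather than the ASSAVID models.

```python
# Illustrative check of several highlight models against one cue stream.
# Zone sequences, cue names ("zone", "ball_speed") and thresholds are assumptions.

MODELS = {
    "shot_on_goal": ["midfield", "penalty_area", "goal_area"],
    "corner_kick":  ["corner", "penalty_area"],
}

def zone_sequence(cues, min_ball_speed=2.0):
    """Keep the framed playfield zone only while the ball is actually moving."""
    seq = []
    for c in cues:
        if c["ball_speed"] >= min_ball_speed and (not seq or seq[-1] != c["zone"]):
            seq.append(c["zone"])
    return seq

def matched_highlights(cues, models=MODELS):
    """Return every model whose zone sequence appears, in order, in the observations."""
    observed = zone_sequence(cues)
    hits = []
    for name, expected in models.items():
        it = iter(observed)
        if all(z in it for z in expected):   # ordered subsequence test
            hits.append(name)
    return hits

stream = [{"zone": "midfield", "ball_speed": 4.0},
          {"zone": "penalty_area", "ball_speed": 3.5},
          {"zone": "goal_area", "ball_speed": 5.0}]
print(matched_highlights(stream))  # -> ['shot_on_goal']
```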

4.
5.
The dramatic growth of video content over modern media channels (such as the Internet and mobile phone platforms) directs the interest of media broadcasters towards the topics of video retrieval and content browsing. Several video retrieval systems benefit from the use of semantic indexing based on content, since it allows an intuitive categorization of videos. However, indexing is usually performed through manual annotation, thus introducing potential problems such as ambiguity, lack of information, and non-relevance of index terms. In this paper, we present SHIATSU, a complete system for video retrieval based on the (semi-)automatic hierarchical semantic annotation of videos exploiting the analysis of visual content; videos can then be searched by means of attached tags and/or visual features. We experimentally evaluate the performance of SHIATSU on two different real video benchmarks, demonstrating its accuracy and efficiency.

6.
Automatic video segmentation plays a vital role in sports video annotation. This paper presents a fully automatic and computationally efficient algorithm for the analysis of sports videos. Various methods of automatic shot boundary detection have been proposed to perform automatic video segmentation. These investigations mainly concentrate on detecting fades and dissolves for fast processing of the entire video scene without providing any additional feedback on object relativity within the shots. The goal of the proposed method is to identify regions that perform certain activities in a scene. The model uses some low-level feature video processing algorithms to extract the shot boundaries from a video scene and to identify dominant colours within these boundaries. An object classification method is used for clustering the seed distributions of the dominant colours into homogeneous regions. Using a simple tracking method, these regions are then classified as active or static. The efficiency of the proposed framework is demonstrated on a standard video benchmark with numerous types of sports events, and the experimental results show that our algorithm can be used with high accuracy for automatic annotation of active regions in sports videos.
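A rough sketch of the two low-level steps named above, assuming histogram-difference shot boundary detection and a small k-means over sampled pixels for dominant colours; the paper's actual features, thresholds, and clustering may differ.

```python
import numpy as np

def color_histogram(frame, bins=8):
    """Joint RGB histogram of a frame (H, W, 3), normalised to sum to 1."""
    hist, _ = np.histogramdd(frame.reshape(-1, 3),
                             bins=(bins, bins, bins), range=[(0, 256)] * 3)
    return hist / hist.sum()

def shot_boundaries(frames, threshold=0.5):
    """Indices where the histogram difference between consecutive frames is large."""
    hists = [color_histogram(f) for f in frames]
    return [i for i in range(1, len(frames))
            if 0.5 * np.abs(hists[i] - hists[i - 1]).sum() > threshold]

def dominant_colors(frames, k=3, iters=10, seed=0):
    """Rough k-means over the pixels of a shot to find its dominant colours."""
    pixels = np.concatenate([f.reshape(-1, 3) for f in frames]).astype(float)
    rng = np.random.default_rng(seed)
    centers = pixels[rng.choice(len(pixels), k, replace=False)]
    for _ in range(iters):
        labels = np.argmin(((pixels[:, None] - centers) ** 2).sum(-1), axis=1)
        centers = np.array([pixels[labels == j].mean(0) if (labels == j).any()
                            else centers[j] for j in range(k)])
    return centers.astype(int)

# Toy clip: 5 green "playfield" frames followed by 5 grey "crowd" frames.
clip = np.full((10, 48, 64, 3), (40, 160, 60), dtype=np.uint8)
clip[5:] = (120, 120, 120)
print(shot_boundaries(list(clip)))       # -> [5]
print(dominant_colors(list(clip[:5])))   # ~ [[40, 160, 60], ...]
```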

7.
To support effective multimedia information retrieval, video annotation has become an important topic in video content analysis. Existing video annotation methods put the focus on either the analysis of low-level features or simple semantic concepts, and they cannot reduce the gap between low-level features and high-level concepts. In this paper, we propose an innovative method for semantic video annotation through integrated mining of visual features, speech features, and frequent semantic patterns existing in the video. The proposed method consists of two main phases: 1) construction of four kinds of predictive annotation models, namely speech-association, visual-association, visual-sequential, and statistical models, from annotated videos; 2) fusion of these models for annotating unannotated videos automatically. The main advantage of the proposed method lies in that all visual features, speech features, and semantic patterns are considered simultaneously. Moreover, the utilization of high-level rules can effectively complement the insufficiency of statistics-based methods in dealing with complex and broad keyword identification in video annotation. Through empirical evaluation on NIST TRECVID video datasets, the proposed approach is shown to enhance the performance of annotation substantially in terms of precision, recall, and F-measure.
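A toy illustration of the fusion phase, assuming each of the four trained models returns keyword scores for an unannotated shot and that the fusion is a simple weighted sum; the paper's actual fusion rules are richer, and the scores and weights below are invented.

```python
# Illustrative late fusion of per-model keyword scores for one unannotated shot.

model_scores = {
    "speech_association":  {"goal": 0.7, "crowd": 0.2},
    "visual_association":  {"goal": 0.5, "playfield": 0.6},
    "visual_sequential":   {"goal": 0.4, "replay": 0.3},
    "statistical":         {"crowd": 0.5, "goal": 0.1},
}
model_weights = {"speech_association": 0.3, "visual_association": 0.3,
                 "visual_sequential": 0.2, "statistical": 0.2}

def fuse_annotations(scores, weights, top_k=2):
    """Weighted sum of keyword scores across models, returning the top keywords."""
    fused = {}
    for model, kw_scores in scores.items():
        for keyword, s in kw_scores.items():
            fused[keyword] = fused.get(keyword, 0.0) + weights[model] * s
    return sorted(fused.items(), key=lambda kv: kv[1], reverse=True)[:top_k]

print(fuse_annotations(model_scores, model_weights))
# -> [('goal', 0.46), ('playfield', 0.18)]
```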

8.
9.
In this paper, we present a large database of over 50,000 user-labeled videos collected from YouTube. We develop a compact representation called "tiny videos" that achieves high video compression rates while retaining the overall visual appearance of the video as it varies over time. We show that frame sampling using affinity propagation, an exemplar-based clustering algorithm, achieves the best trade-off between compression and video recall. We use this large collection of user-labeled videos in conjunction with simple data mining techniques to perform related video retrieval, as well as classification of images and video frames. The classification results achieved by tiny videos are compared with the tiny images framework [24] for a variety of recognition tasks. The tiny images data set consists of 80 million images collected from the Internet. These are the largest labeled research data sets of videos and images available to date. We show that tiny videos are better suited for classifying scenery and sports activities, while tiny images perform better at recognizing objects. Furthermore, we demonstrate that combining the tiny images and tiny videos data sets improves classification precision in a wider range of categories.
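The exemplar-based frame sampling can be sketched with scikit-learn's AffinityPropagation; the per-frame descriptors below are synthetic stand-ins for whatever features the authors actually cluster.

```python
import numpy as np
from sklearn.cluster import AffinityPropagation

def sample_exemplar_frames(descriptors):
    """Cluster per-frame descriptors and keep one exemplar frame per cluster."""
    ap = AffinityPropagation(random_state=0).fit(descriptors)
    return sorted(ap.cluster_centers_indices_)   # indices of exemplar frames

# Toy descriptors: 30 frames drifting through three distinct "looks".
rng = np.random.default_rng(1)
descriptors = np.concatenate([rng.normal(loc=c, scale=0.05, size=(10, 16))
                              for c in (0.0, 1.0, 2.0)])
print(sample_exemplar_frames(descriptors))   # roughly one exemplar index per visual cluster
```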

10.
The paper proposes measures for weighted indexing of sports news videos. Content-based analyses of sports news videos lead to the classification of frames or shots into sports categories. The set of sports categories reported in a given news video can be used as a video representation in a visual information retrieval system. However, such an approach does not take into account how many sports events of a given category have been reported and how long these events have been presented to televiewers. Weighting of sports categories in a video representation, reflecting their importance in a given video or in a whole video database, would be desirable. The effects of applying the proposed measures have been demonstrated on a test video collection. The experiments and evaluations performed on this collection have also shown that we do not need to apply perfect content-based analyses to ensure proper weighted indexing of sports news videos. It is sufficient to recognize the content of only some frames and to determine the number of shots, scenes or pseudo-scenes detected in the temporal aggregation process, or even only the number of events of a given sports category in the sports news video being indexed.
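One plausible (but invented) weighted-indexing measure consistent with the description above: a category's weight grows with both the number of reported events and their total screen time, normalised within the video. The paper's own measures may be defined differently.

```python
# Illustrative weighting of sports categories in one news video; numbers are made up.

shots = [  # (recognised category, duration in seconds)
    ("soccer", 40), ("soccer", 25), ("ski jumping", 30), ("tennis", 10),
]

def category_weights(shots, alpha=0.5):
    """Blend event-count share and duration share into one weight per category."""
    total_events, total_time = len(shots), sum(d for _, d in shots)
    weights = {}
    for cat in {c for c, _ in shots}:
        n = sum(1 for c, _ in shots if c == cat)
        t = sum(d for c, d in shots if c == cat)
        weights[cat] = alpha * n / total_events + (1 - alpha) * t / total_time
    return dict(sorted(weights.items(), key=lambda kv: -kv[1]))

print(category_weights(shots))
# -> soccer ~0.56, ski jumping ~0.27, tennis ~0.17
```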

11.
12.
13.
Liu Zihe, Hou Weiying, Zhang Jiayi, Cao Chenyu, Wu Bin. Multimedia Tools and Applications, 2022, 81(4): 4909-4934.

Automatically interpreting social relations, e.g., friendship, kinship, etc., from visual scenes has huge potential application value in areas such as knowledge graph construction, person behavior and emotion analysis, and entertainment ecology. Great progress has been made in social analysis based on structured data. However, existing video-based methods treat social relationship extraction as a general classification task and categorize videos into only predefined types. Such methods are unable to recognize multiple relations in multi-person videos, which is obviously not consistent with actual application scenarios. At the same time, videos are inherently multimodal: subtitles in the video also provide abundant cues for relationship recognition that are often ignored by researchers. In this paper, we introduce and define a new task named “Multiple-Relation Extraction in Videos (MREV)”. To solve the MREV task, we propose the Visual-Textual Fusion (VTF) framework for jointly modeling visual and textual information. For the spatial representation, we not only adopt a SlowFast network to learn global action and scene information, but also exploit the unique cues of face, body and dialogue between characters. For the temporal domain, we propose a Temporal Feature Aggregation module to perform temporal reasoning, which adaptively assesses the quality of different frames. After that, we use a Multi-Conv Attention module to capture the inter-modal correlation and map the features of different modalities to a coordinated feature space. By this means, our VTF framework comprehensively exploits abundant multimodal cues for the MREV task and achieves 49.2% and 50.4% average accuracy on a self-constructed Video Multiple-Relation (VMR) dataset and the ViSR dataset, respectively. Extensive experiments on the VMR and ViSR datasets demonstrate the effectiveness of the proposed framework.
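A toy PyTorch sketch of the adaptive frame weighting described for the Temporal Feature Aggregation module: each frame feature is scored by a small network and the clip representation is the softmax-weighted sum of frames. The dimensions and scoring network are assumptions, not the authors' architecture.

```python
import torch
import torch.nn as nn

class TemporalAggregation(nn.Module):
    """Toy stand-in for temporal feature aggregation: score each frame's feature
    with a small MLP, then pool the frames with the resulting softmax weights."""
    def __init__(self, dim=256):
        super().__init__()
        self.scorer = nn.Sequential(nn.Linear(dim, 64), nn.ReLU(), nn.Linear(64, 1))

    def forward(self, frame_feats):                                 # (batch, n_frames, dim)
        weights = torch.softmax(self.scorer(frame_feats), dim=1)   # (batch, n_frames, 1)
        return (weights * frame_feats).sum(dim=1)                  # (batch, dim)

feats = torch.randn(2, 16, 256)                # 2 clips, 16 frames, 256-d features
print(TemporalAggregation()(feats).shape)      # torch.Size([2, 256])
```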


14.
Sports video annotation is important for sports video semantic analysis such as event detection and personalization. In this paper, we propose a novel approach for sports video semantic annotation and personalized retrieval. Different from state-of-the-art sports video analysis methods, which heavily rely on audio/visual features, the proposed approach incorporates web-casting text into sports video analysis. Compared with previous approaches, the contributions of our approach include the following. 1) The event detection accuracy is significantly improved due to the incorporation of web-casting text analysis. 2) The proposed approach is able to detect exact event boundaries and extract event semantics that are very difficult or impossible to handle with previous approaches. 3) The proposed method is able to create a personalized summary from both a general and a specific point of view related to a particular game, event, player, or team according to the user's preference. We present the framework of our approach and details of text analysis, video analysis, text/video alignment, and personalized retrieval. The experimental results on event boundary detection in sports video are encouraging and comparable to manually selected events. The evaluation of personalized retrieval shows it is effective in helping meet users' expectations.
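A minimal sketch of the text/video alignment idea, assuming hypothetical web-casting lines that carry a game-time stamp and a known kick-off offset in the video; the paper's text analysis and alignment are considerably more elaborate.

```python
import re

# Hypothetical web-casting text lines: "<minute>' <event>: <description>"
webcast = [
    "12' Goal: Player A scores from the edge of the box",
    "34' Yellow card: Player B",
    "78' Substitution: Player C replaces Player D",
]

def align_events(lines, kickoff_sec, pre_roll=20, post_roll=40):
    """Map each text event to a rough video segment around its game time."""
    events = []
    for line in lines:
        m = re.match(r"(\d+)'\s+([^:]+):\s*(.*)", line)
        if not m:
            continue
        minute, label, desc = int(m.group(1)), m.group(2), m.group(3)
        t = kickoff_sec + 60 * minute          # event time in the video, in seconds
        events.append({"label": label, "desc": desc,
                       "start": t - pre_roll, "end": t + post_roll})
    return events

for e in align_events(webcast, kickoff_sec=300):
    print(e["label"], e["start"], e["end"])
```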

15.
In the last half century, the most widely used video storage devices have been magnetic tapes, on which information is stored in analog format based on electromagnetic principles. When digital technology became dominant, it was necessary to convert analog information to digital format in order to preserve these data. Unfortunately, analog videos may be affected by drops that produce visual defects, which can be acquired during the digitization process. Although much hardware exists to perform the digitization, only a few devices implement automatic correction of these defects. In some cases, drop removal is possible through the analog device. However, when one owns a damaged, already-converted video, correction based on image processing techniques is the only way to enhance it. In this paper, the drop, also known as "Tracking Error" or "Mistracking," is analyzed. We propose an algorithm to detect the drops' visual artifacts in the converted videos, as well as a digital restoration method.
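The abstract gives no algorithmic detail, so the following is only a crude illustration of how a drop band might be flagged: rows whose change from the previous frame is a statistical outlier are marked as suspect. The thresholds and test data are invented.

```python
import numpy as np

def suspect_drop_rows(prev_frame, frame, z_thresh=4.0):
    """Flag rows whose change from the previous frame is an outlier: a crude proxy
    for the horizontal band artifacts produced by tape drops / mistracking."""
    row_diff = np.abs(frame.astype(float) - prev_frame.astype(float)).mean(axis=(1, 2))
    z = (row_diff - row_diff.mean()) / (row_diff.std() + 1e-8)
    return np.where(z > z_thresh)[0]

# Toy pair of frames with a simulated corrupted band in rows 40-43.
prev = np.full((100, 160, 3), 100, dtype=np.uint8)
curr = prev.copy()
curr[40:44] = 250                      # simulated drop band
print(suspect_drop_rows(prev, curr))   # -> [40 41 42 43]
```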

16.
Facing the explosive growth of near-duplicate videos, video archaeology is highly desirable for investigating the history of the manipulations applied to these videos. Once derived videos are determined according to these manipulations, a video migration map can be constructed from the pair-wise relationships in a set of near-duplicate videos. In this paper, we propose an improved video archaeology (I-VA) system by extending our previous work (Shen et al. 2010). The extensions include more comprehensive video manipulation detectors and improved techniques for these detectors. Specifically, the detectors cover two categories of manipulations, i.e., semantic-based and non-semantic-based manipulations. Moreover, the improved detection algorithms are more stable. The key of I-VA is the construction of a video migration map, which represents the history of how near-duplicate videos have been manipulated. There are various applications based on the proposed I-VA system, such as better understanding of the meaning and context conveyed by the manipulated videos, improving current video search engines through better presentation based on the migration map, and a better indexing scheme based on annotation propagation. The system is tested on a collection of 12,790 videos and 3,481 duplicates. The experimental results show that I-VA can discover the manipulation relations among near-duplicate videos effectively.
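A toy sketch of building a migration map as a directed graph over near-duplicate videos, where an edge records the manipulations that would turn one video into another; the placeholder detectors below merely stand in for the I-VA system's semantic and non-semantic detectors.

```python
# Nodes are near-duplicate videos; a directed edge u -> v means v looks derived from u.

videos = {
    "v0": {"duration": 120, "has_logo": False, "has_captions": False},
    "v1": {"duration": 120, "has_logo": True,  "has_captions": False},
    "v2": {"duration": 90,  "has_logo": True,  "has_captions": True},
}

def detect_manipulations(src, dst):
    """Return the manipulations that would turn src into dst (toy heuristics only)."""
    ops = []
    if dst["duration"] < src["duration"]:
        ops.append("cropping/segment removal")
    if dst["has_logo"] and not src["has_logo"]:
        ops.append("logo insertion")
    if dst["has_captions"] and not src["has_captions"]:
        ops.append("caption overlay")
    return ops

def migration_map(videos):
    """Directed edges (src, dst, manipulations) over all ordered video pairs."""
    return [(a, b, ops) for a in videos for b in videos if a != b
            and (ops := detect_manipulations(videos[a], videos[b]))]

for edge in migration_map(videos):
    print(edge)
# ('v0', 'v1', ['logo insertion']), ('v0', 'v2', [...]), ('v1', 'v2', [...])
```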

17.
18.
19.
20.

Saliency prediction models provide a probabilistic map of the relative likelihood that an image or video region will attract the attention of the human visual system. Over the past decade, many computational saliency prediction models have been proposed for 2D images and videos. Considering that the human visual system has evolved in a natural 3D environment, it is only natural to want to design visual attention models for 3D content. Existing monocular saliency models are not able to accurately predict the attentive regions when applied to 3D image/video content, as they do not incorporate depth information. This paper explores stereoscopic video saliency prediction by exploiting both low-level attributes such as brightness, color, texture, orientation, motion, and depth, and high-level cues such as face, person, vehicle, animal, text, and horizon. Our model starts with a rough segmentation and quantifies several intuitive observations, such as the effects of visual discomfort level, depth abruptness, motion acceleration, elements of surprise, size and compactness of the salient regions, and the emphasis of only a few salient objects in a scene. A new fovea-based model of spatial distance between image regions is adopted for local and global feature calculations. To efficiently fuse the conspicuity maps generated by our method into a single saliency map that is highly correlated with the eye-fixation data, a random forest based algorithm is utilized. The performance of the proposed saliency model is evaluated against the results of an eye-tracking experiment, which involved 24 subjects and an in-house database of 61 captured stereoscopic videos. Our stereo video database as well as the eye-tracking data are publicly available along with this paper. Experimental results show that the proposed saliency prediction method achieves competitive performance compared to the state-of-the-art approaches.
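The random-forest fusion step can be sketched with scikit-learn: per-pixel conspicuity values from several channels are regressed against eye-fixation density, and the trained forest then predicts a single saliency map for a new frame. The channel names and data below are synthetic.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

# Synthetic training data: each row holds per-pixel conspicuity values from a few
# feature channels (placeholder names); the target is eye-fixation density.
rng = np.random.default_rng(0)
n = 5000
conspicuity = rng.random((n, 4))          # e.g. color, motion, depth, face channels
fixation = (0.2 * conspicuity[:, 0] + 0.1 * conspicuity[:, 1]
            + 0.4 * conspicuity[:, 2] + 0.3 * conspicuity[:, 3]
            + 0.05 * rng.normal(size=n))

forest = RandomForestRegressor(n_estimators=50, random_state=0).fit(conspicuity, fixation)

# Fuse the conspicuity maps of a new 60x80 frame into a single saliency map.
h, w = 60, 80
new_maps = rng.random((h * w, 4))
saliency = forest.predict(new_maps).reshape(h, w)
print(saliency.shape, float(saliency.min()), float(saliency.max()))
```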

