Similar Documents
20 similar documents found (search time: 125 ms)
1.
We present a method for reshuffle-based 3D interior scene synthesis guided by scene structures. Given several 3D scenes, we represent each scene as a structure graph associated with a relationship set. Considering both object similarity and relation similarity, we establish a furniture-object-based matching between scene pairs via graph matching. This matching allows us to merge the structure graphs into a unified structure, the Augmented Graph (AG). Guided by the AG, we perform scene synthesis by reshuffling objects through three simple operations: replacing, growing, and transferring. A synthesis compatibility measure that considers the environment of the furniture objects is also introduced to filter out poor-quality results. We show that our method generates high-quality scene variations and outperforms the state of the art.
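As an illustration of the matching step, here is a minimal Python sketch that pairs the furniture objects of two scenes by combining object similarity with a crude relation summary, solved as an assignment problem. The feature vectors, the degree-based relation term, and the weight alpha are illustrative assumptions, not the paper's definitions.

import numpy as np
from scipy.optimize import linear_sum_assignment

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-9))

def match_objects(feat_a, rel_a, feat_b, rel_b, alpha=0.5):
    """Match objects of scene A to scene B, mixing object similarity
    (cosine of feature vectors) with a first-order relation similarity."""
    n, m = len(feat_a), len(feat_b)
    sim = np.zeros((n, m))
    deg_a, deg_b = rel_a.sum(1), rel_b.sum(1)  # crude relation summaries
    for i in range(n):
        for j in range(m):
            obj = cosine(feat_a[i], feat_b[j])
            rel = 1.0 - abs(deg_a[i] - deg_b[j]) / (deg_a.max() + deg_b.max() + 1e-9)
            sim[i, j] = alpha * obj + (1 - alpha) * rel
    rows, cols = linear_sum_assignment(-sim)  # maximize total similarity
    return list(zip(rows.tolist(), cols.tolist()))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    feat_a, feat_b = rng.random((4, 8)), rng.random((4, 8))
    rel_a = (rng.random((4, 4)) > 0.5).astype(float)
    rel_b = (rng.random((4, 4)) > 0.5).astype(float)
    print(match_objects(feat_a, rel_a, feat_b, rel_b))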

2.
Automatic personalized video abstraction for sports videos using metadata (total citations: 1; self-citations: 1; citations by others: 0)
Video abstraction is defined as creating a video abstract that includes only the important information in the original video streams. There are two general types of video abstracts: dynamic and static. A dynamic video abstract is a 3-dimensional representation created by temporally arranging important scenes, while a static video abstract is a 2-dimensional representation created by spatially arranging only the keyframes of important scenes. In this paper, we propose a unified method for automatically creating both types of video abstracts that considers semantic content, targeting broadcast sports videos in particular. For both types, the proposed method first determines the significance of scenes. A play scene, which corresponds to a play, is treated as the scene unit of sports videos, and the significance of every play scene is determined from the play's rank, the time the play occurred, and the number of replays. This information is extracted from the metadata, which describes the semantic content of the video and lets us consider not only the type of each play but also its influence on the game. In addition, the user's preferences are taken into account to personalize the video abstracts. For dynamic video abstracts, we propose three approaches for selecting the play scenes of highest significance: the basic criterion, the greedy criterion, and the play-cut criterion. For static video abstracts, we also propose an effective display style in which a user can easily reach target scenes from a list of keyframes by tracing the tree structure of a sports game. We experimentally verified the effectiveness of our method by comparing our results with manually created video abstracts and by conducting questionnaires.
Corresponding author: Noboru Babaguchi
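A minimal sketch of the significance scoring and the greedy selection criterion, assuming each play scene carries rank, timestamp, replay-count, and duration metadata; the weights and the linear scoring formula are illustrative stand-ins for the paper's definitions.

def significance(play, w_rank=0.5, w_time=0.2, w_replay=0.3, game_len=90.0):
    # Higher-ranked plays, later plays, and heavily replayed plays score higher.
    return (w_rank * play["rank"]
            + w_time * play["minute"] / game_len
            + w_replay * play["replays"])

def greedy_abstract(plays, budget_sec):
    # Greedy criterion: keep taking the most significant play that still fits.
    chosen, used = [], 0.0
    for p in sorted(plays, key=significance, reverse=True):
        if used + p["duration"] <= budget_sec:
            chosen.append(p)
            used += p["duration"]
    return sorted(chosen, key=lambda p: p["minute"])  # restore story order

plays = [
    {"rank": 1.0, "minute": 88, "replays": 3, "duration": 25.0},  # e.g., a goal
    {"rank": 0.4, "minute": 10, "replays": 1, "duration": 18.0},
    {"rank": 0.7, "minute": 55, "replays": 2, "duration": 20.0},
]
print([p["minute"] for p in greedy_abstract(plays, budget_sec=50)])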

3.
4.
Detection and representation of scenes in videos (total citations: 4; self-citations: 0; citations by others: 4)
This paper presents a method for high-level segmentation of videos into scenes. A scene can be defined as a subdivision of a play in which either the setting is fixed or the action is continuous in one place. We exploit this fact and propose a novel approach to clustering shots into scenes by transforming the task into a graph partitioning problem. This is achieved by constructing a weighted undirected graph called a shot similarity graph (SSG), where each node represents a shot and the edges between shots are weighted by their similarity based on color and motion information. The SSG is then split into subgraphs by applying normalized cuts for graph partitioning. The resulting partitions represent the individual scenes of the video. When clustering the shots, we consider the global similarities of shots rather than individual shot pairs. We also propose a method for describing the content of each scene by selecting one representative image from the video as a scene keyframe. DVDs have recently become available with a chapter-selection option in which each chapter is represented by one image; our algorithm automates this task, which is useful for applications such as video-on-demand, digital libraries, and the Internet. Experiments on several Hollywood movies and one sitcom are presented with promising results.
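A minimal sketch of the shot similarity graph and its partitioning, assuming shots are summarized by color histograms and a scalar motion value; scikit-learn's spectral clustering is used here as a stand-in for the paper's normalized-cuts solver.

import numpy as np
from sklearn.cluster import SpectralClustering

def build_ssg(color_hists, motion, sigma_c=0.5, sigma_m=0.5):
    """Edge weight = Gaussian affinity over color and motion distances."""
    n = len(color_hists)
    w = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            dc = np.linalg.norm(color_hists[i] - color_hists[j])
            dm = abs(motion[i] - motion[j])
            w[i, j] = np.exp(-dc**2 / sigma_c**2 - dm**2 / sigma_m**2)
    return w

rng = np.random.default_rng(1)
hists = rng.random((12, 16)); hists /= hists.sum(1, keepdims=True)
motion = rng.random(12)
labels = SpectralClustering(n_clusters=3, affinity="precomputed",
                            random_state=0).fit_predict(build_ssg(hists, motion))
print(labels)  # scene label per shot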

5.
Since the invention of digital video, significant progress has been made in reducing the amount of data that must be transferred over the World Wide Web while improving the viewing experience. However, the paradigm of linear playback has not changed at all. While the feature set of traditional digital video may be sufficient for some applications, there are several use cases where a significantly improved way of interacting with the content is highly desirable. A video can be organized in an interactive and non-linear way, and additional information (for example, high-resolution images) can be attached to any scene. The non-linearity of the video flow and the inclusion of content not found in traditional videos may, however, increase the download volume and/or interrupt playback with pauses for downloading missing elements. This paper describes a player framework for interactive non-linear videos. We developed the framework and its associated algorithms to simulate the playback of non-linear video content. It minimizes interruptions when the sequence of scenes is directly influenced by interaction, while the traditional viewing experience is left unaltered. The evaluation showed that our strategies reach shorter startup times than the strategies of other players. Furthermore, we demonstrated that prefetching elements does not increase the download volume in every case; it can even decrease it if the right delete strategy is selected. In short, knowledge of the structure of an interactive non-linear video can be used to minimize startup times at the beginning of scenes without increasing the download volume.
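A minimal sketch of one prefetching policy such a player might use, assuming the scene graph, per-scene download sizes, and branch probabilities are known; the one-step-lookahead policy and all names are illustrative, not the framework's actual algorithms.

def prefetch_plan(graph, probs, sizes, current, cache, budget):
    """Pick successors of the current scene to download, most likely first,
    until the bandwidth budget for this playback window is spent."""
    plan, spent = [], 0
    candidates = sorted(graph.get(current, []),
                        key=lambda s: probs.get((current, s), 0.0),
                        reverse=True)
    for nxt in candidates:
        if nxt in cache:
            continue  # already downloaded; no startup delay for this branch
        if spent + sizes[nxt] > budget:
            break
        plan.append(nxt)
        spent += sizes[nxt]
    return plan

graph = {"intro": ["a", "b"], "a": ["end"], "b": ["end"]}
probs = {("intro", "a"): 0.8, ("intro", "b"): 0.2}
sizes = {"a": 40, "b": 35, "end": 20}
print(prefetch_plan(graph, probs, sizes, "intro", cache={"intro"}, budget=60))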

6.
Grouping video content into semantic segments and classifying semantic scenes into different types are crucial processes for content-based video organization, management, and retrieval. In this paper, a novel approach for automatically segmenting scenes and representing them semantically is proposed. First, video shots are detected using a coarse-to-fine algorithm. Second, keyframes within each shot are selected adaptively using hybrid features, and redundant keyframes are removed by template matching. Third, spatio-temporally coherent shots are clustered into the same scene based on the temporal constraints of video content and the visual similarity between shot activities. Finally, based on a thorough analysis of the typical characteristics of continuously recorded videos, scene content is represented semantically to satisfy human demands on video retrieval. The proposed algorithm has been evaluated on various genres of films and TV programs. Promising experimental results show that the proposed method supports efficient retrieval of interesting video content.
Corresponding author: Yuncai Liu
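A minimal sketch of the temporally constrained shot-to-scene grouping in the spirit of the third step, assuming each shot is summarized by a color histogram; the window size, histogram-intersection similarity, and threshold are illustrative.

import numpy as np

def group_shots(shot_hists, window=4, sim_thresh=0.7):
    scenes = []  # each scene is a list of shot indices
    for i, h in enumerate(shot_hists):
        placed = False
        for scene in scenes:
            last = scene[-1]
            if i - last > window:
                continue  # temporal constraint: scene ended too long ago
            sim = np.minimum(shot_hists[last], h).sum()  # histogram intersection
            if sim >= sim_thresh:
                scene.append(i)
                placed = True
                break
        if not placed:
            scenes.append([i])  # start a new scene
    return scenes

rng = np.random.default_rng(2)
hists = rng.dirichlet(np.ones(16), size=10)  # 10 toy shot histograms
print(group_shots(hists))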

7.
Browsing video scenes is the process of unfolding the storyline of a long video archive, and it can help users locate desired video segments quickly and efficiently. Automatic scene detection in a long video stream is hence the first and crucial step toward a concise and comprehensive content-based representation for indexing, browsing, and retrieval. In this paper, we present a novel scene detection scheme for various video types. We first detect video shots using a coarse-to-fine algorithm. Keyframes without useful information are then detected and removed using template matching. Spatio-temporally coherent shots are subsequently grouped into the same scene based on the temporal constraints of video content and the visual similarity of shot activity. The proposed algorithm has been evaluated on various types of videos, including movies and TV programs. Promising experimental results show that the proposed method supports efficient retrieval of video content of interest.

8.
Video anomaly detection via multiscale transformation of graph structures (total citations: 1; self-citations: 0; citations by others: 1)
Objective: Video anomaly detection in surveillance scenes suffers from large data volumes and slow detection speed; to address this, we propose a video anomaly detection method based on multiscale transformation of graph structures. Method: Exploiting the spatial correlations among optical-flow features in video, we construct an optical-flow feature graph and, under correlation constraints, apply an iterative scaling transformation to it, effectively reducing the number of optical-flow features used in anomaly detection and thereby optimizing the feature set. The scaling transformation first downsamples the graph by selecting vertices according to the polarity of the eigenvector associated with the largest eigenvalue of the graph Laplacian, and then rebuilds the intrinsic connections between the remaining vertices by Kron reduction, reconstructing the optical-flow feature graph. Results: The method increases detection speed at the cost of a slight drop in accuracy. On the UMN dataset, a single scaling pass lowers detection accuracy by 3.2% while raising detection speed by 19.1%, a clear speed gain over the whole video set; two passes lower accuracy by 7.3%, which no longer meets practical requirements, whereas a single pass still delivers the expected detection quality. On the Web dataset, a single scaling pass lowers accuracy by 1.9% while raising speed by 32%; two passes lower accuracy by 4.8% while raising speed by 51%. The number of scaling passes, one or two, should therefore be chosen by weighing detection accuracy against detection speed; beyond that, detection quality no longer meets requirements. Conclusion: An irregular graph structure better expresses the spatial relationships among features, and even after multiscale transformation the graph still preserves strong spatial relationships between them. By choosing a suitable number of scaling passes for each surveillance scenario, weighing accuracy against speed, fast anomaly detection is achieved.
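A minimal numpy sketch of the scaling transformation described above: vertices are selected by the polarity of the Laplacian's largest eigenvector, and the reduced graph is rebuilt by Kron reduction (a Schur complement of the discarded block). The random weighted graph stands in for the optical-flow feature graph.

import numpy as np

def downsample_graph(W):
    L = np.diag(W.sum(1)) - W                      # graph Laplacian
    vals, vecs = np.linalg.eigh(L)
    u_max = vecs[:, -1]                            # eigenvector of largest eigenvalue
    keep = np.where(u_max >= 0)[0]                 # polarity-based vertex selection
    drop = np.where(u_max < 0)[0]
    # Kron reduction: L_red = L_kk - L_kd @ inv(L_dd) @ L_dk
    L_kk = L[np.ix_(keep, keep)]
    L_kd = L[np.ix_(keep, drop)]
    L_dd = L[np.ix_(drop, drop)] + 1e-9 * np.eye(len(drop))  # regularize
    L_red = L_kk - L_kd @ np.linalg.solve(L_dd, L_kd.T)
    W_red = -L_red
    np.fill_diagonal(W_red, 0.0)                   # recover edge weights
    return keep, W_red

rng = np.random.default_rng(3)
A = rng.random((8, 8)); W = np.triu(A, 1); W = W + W.T  # symmetric weights
keep, W_red = downsample_graph(W)
print(keep, W_red.shape)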

9.
Using string matching to detect video transitions (total citations: 2; self-citations: 0; citations by others: 2)
The detection of shot boundaries in videos captures the structure of the image sequences by identifying transitional effects, a task that is important in the video indexing and retrieval domain. The video slice, or visual rhythm, is a single two-dimensional image sampling that has been used to detect several types of video events, including transitions. We use the longest common subsequence (LCS) between two strings to transform the video slice into one-dimensional signals, obtaining a highly simplified representation of the video content. We also developed a chain of mathematical morphology operations over these signals that leads to the detection of the most frequent video transitions, namely cuts, fades, and wipes. The algorithms are tested successfully on various genres of videos.
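A minimal sketch of the LCS computation at the core of the method, assuming each slice row has already been quantized into a string of symbols; the normalization convention is illustrative. Standard O(n*m) dynamic programming.

def lcs_length(a: str, b: str) -> int:
    n, m = len(a), len(b)
    dp = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            if a[i - 1] == b[j - 1]:
                dp[i][j] = dp[i - 1][j - 1] + 1
            else:
                dp[i][j] = max(dp[i - 1][j], dp[i][j - 1])
    return dp[n][m]

def slice_similarity(row_t: str, row_t1: str) -> float:
    # Normalized LCS: near 1.0 within a shot, dropping sharply at a cut.
    return lcs_length(row_t, row_t1) / max(len(row_t), len(row_t1))

print(slice_similarity("aabbccdd", "aabbccee"))  # high: same shot
print(slice_similarity("aabbccdd", "ppqqrrss"))  # low: likely transition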

10.
3D video billboard clouds reconstruct and represent a dynamic three-dimensional scene using displacement-mapped billboards. They consist of geometric proxy planes augmented with detailed displacement maps and combine the generality of geometry-based 3D video with the regularization properties of image-based 3D video. 3D video billboards are an image-based representation placed in the disparity space of the acquisition cameras and thus provide a regular sampling of the scene with a uniform error model. We propose a general geometry filtering framework that generates time-coherent models and removes reconstruction and quantization noise as well as calibration errors. This replaces the complex and time-consuming sub-pixel matching process of stereo reconstruction with a bilateral filter. Rendering is performed using a GPU-accelerated algorithm that generates consistent view-dependent geometry and textures for each individual frame. In addition, we present a semi-automatic approach for modeling dynamic three-dimensional scenes with a set of multiple 3D video billboard clouds.
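A minimal numpy sketch of bilateral filtering applied to a displacement/depth map, standing in for the paper's geometry filtering framework; the kernel widths and the synthetic noisy ramp are illustrative.

import numpy as np

def bilateral_filter(depth, radius=2, sigma_s=1.5, sigma_r=0.1):
    """Smooth a depth map while preserving discontinuities: weights combine
    spatial closeness (sigma_s) and depth similarity (sigma_r)."""
    h, w = depth.shape
    out = np.empty_like(depth)
    for y in range(h):
        for x in range(w):
            y0, y1 = max(0, y - radius), min(h, y + radius + 1)
            x0, x1 = max(0, x - radius), min(w, x + radius + 1)
            patch = depth[y0:y1, x0:x1]
            yy, xx = np.mgrid[y0:y1, x0:x1]
            w_s = np.exp(-((yy - y) ** 2 + (xx - x) ** 2) / (2 * sigma_s**2))
            w_r = np.exp(-((patch - depth[y, x]) ** 2) / (2 * sigma_r**2))
            wgt = w_s * w_r
            out[y, x] = (wgt * patch).sum() / wgt.sum()
    return out

rng = np.random.default_rng(4)
depth = np.tile(np.linspace(1.0, 2.0, 32), (32, 1))   # smooth ramp scene
noisy = depth + 0.05 * rng.standard_normal(depth.shape)
print(float(np.abs(bilateral_filter(noisy) - depth).mean()))  # residual error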

11.
12.
The view-independent visualization of 3D scenes is most often based on rendering accurate 3D models or utilizes image-based rendering techniques. To compute the 3D structure of a scene from a moving vision sensor, or to use image-based rendering approaches, we need to estimate the motion of the sensor from the recorded image information with high accuracy, a problem that has been well studied. In this work, we investigate the relationship between camera design and our ability to perform accurate 3D photography by examining the influence of camera design on the estimation of the motion and structure of a scene from video data. By relating the differential structure of the time-varying plenoptic function to different known and new camera designs, we establish a hierarchy of cameras based on the stability and complexity of the computations necessary to estimate structure and motion. At the low end of this hierarchy is the standard planar pinhole camera, for which the structure-from-motion problem is non-linear and ill-posed. At the high end is a camera we call the full-field-of-view polydioptric camera, for which the motion estimation problem can be solved independently of the depth of the scene, leading to fast and robust algorithms for 3D photography. In between are multiple-view cameras with a large field of view, which we have built, as well as omni-directional sensors.

13.
A temporal-structure-graph method for describing video streams (total citations: 1; self-citations: 0; citations by others: 1)
Decomposing a video stream yields a representation based on a set of keyframes, but such a representation cannot reflect the story development hidden in the stream. To reveal these relationships, a fast clustering algorithm for video streams is proposed to analyze the correlations among the decomposed units. By detecting the similarity and continuity between video shots, the algorithm merges shots taken by the same camera into the same video class, from which a temporal structure graph is derived, providing a new approach to fast browsing and retrieval of video streams.

14.
An effective video scene detection method (total citations: 3; self-citations: 2; citations by others: 3)
Organizing video data sensibly is important for content-based video analysis and applications. Existing shot-based video analysis methods cannot reflect the semantic relations in a video because the shot is too fine-grained a unit, so it is necessary to organize video content by a higher-level semantic unit: the scene. A fast and effective video scene detection method is proposed. Following the principles of film editing, the development patterns of scene content are classified and principles for scene construction are given; a novel grouping method based on a sliding shot window organizes shots with similar content into shot classes; and a shot-class correlation function is defined to measure the correlation between shot classes and complete scene generation. Experimental results demonstrate the speed and effectiveness of the method.
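A minimal sketch of sliding-shot-window grouping and a shot-class correlation test, assuming shots are given as feature vectors; the cosine similarity, window size, threshold, and the interval-overlap correlation are illustrative stand-ins for the paper's definitions.

import numpy as np

def shot_classes(feats, window=3, thresh=0.8):
    """Within a sliding window, attach a shot to the most similar recent
    class; otherwise start a new class."""
    classes = []
    for i, f in enumerate(feats):
        best, best_sim = None, thresh
        for c in classes:
            if i - c[-1] > window:
                continue  # outside the sliding shot window
            sim = float(feats[c[-1]] @ f /
                        (np.linalg.norm(feats[c[-1]]) * np.linalg.norm(f)))
            if sim > best_sim:
                best, best_sim = c, sim
        if best is not None:
            best.append(i)
        else:
            classes.append([i])
    return classes

def class_correlation(c1, c2):
    """Correlation of two shot classes: high when their shot-index ranges
    interleave (crosscutting), zero when they are disjoint in time."""
    a0, a1, b0, b1 = min(c1), max(c1), min(c2), max(c2)
    overlap = min(a1, b1) - max(a0, b0)
    return max(0.0, overlap / (max(a1, b1) - min(a0, b0) + 1e-9))

rng = np.random.default_rng(5)
feats = rng.random((8, 12))
cls = shot_classes(feats)
print(cls, class_correlation(cls[0], cls[-1]) if len(cls) > 1 else 1.0)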

15.
Video provides strong cues for automatic road extraction that are not available in static aerial images. In video from a static camera, or in stabilized (or geo-referenced) aerial video data, motion patterns within a scene enable functional attribution of scene regions. A “road”, for example, may be defined as a path of consistent motion, a definition that is valid in a large and diverse set of environments. The spatio-temporal structure tensor field is an ideal representation of the image derivative distribution at each pixel because it can be updated in real time as video is acquired. An eigen-decomposition of the structure tensor encodes both the local scene motion and the variability of that motion. Additionally, the structure tensor field can be factored into motion components, allowing explicit determination of traffic patterns at intersections. Example results of a real-time system are shown for an urban scene with both well-traveled and infrequently traveled roads, indicating that both can be discovered simultaneously. The method is ideal for urban traffic scenes, which are the most difficult to analyze using static imagery.
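A minimal numpy sketch of the spatio-temporal structure tensor and its eigen-decomposition on a toy video volume; the derivative filters (plain gradients) and the coherence measure are illustrative simplifications of the paper's real-time formulation.

import numpy as np

def structure_tensor_field(vol):
    """vol: (T, H, W) grayscale video volume. Returns per-pixel 3x3 tensors,
    i.e. the sum over frames of outer products of (Ix, Iy, It)."""
    It, Iy, Ix = np.gradient(vol.astype(float))    # gradients along t, y, x
    g = np.stack([Ix, Iy, It], axis=-1)            # (T, H, W, 3)
    return np.einsum("thwi,thwj->hwij", g, g)      # accumulate over time

def motion_coherence(J):
    # Ratio of the largest eigenvalue to the total: near 1 means one dominant
    # direction of image variation, i.e. locally consistent motion.
    vals = np.linalg.eigvalsh(J)                   # ascending, shape (H, W, 3)
    return vals[..., 2] / (vals.sum(-1) + 1e-9)

T, H, W = 8, 16, 16
vol = np.zeros((T, H, W))
for t in range(T):                                 # a bar translating rightward
    vol[t, 6:10, 2 + t:6 + t] = 1.0
coh = motion_coherence(structure_tensor_field(vol))
print(float(coh.mean()))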

16.
In this paper we present a scalable 3D video framework for capturing and rendering dynamic scenes. The acquisition system is based on multiple sparsely placed 3D video bricks, each comprising a projector, two grayscale cameras, and a color camera. Relying on structured light with complementary patterns, texture images and pattern-augmented views of the scene are acquired simultaneously by time-multiplexed projections and synchronized camera exposures. Using space–time stereo on the acquired pattern images, high-quality depth maps are extracted, whose corresponding surface samples are merged into a view-independent, point-based 3D data structure. This representation allows for effective photo-consistency enforcement and outlier removal, leading to a significant decrease of visual artifacts and a high resulting rendering quality using EWA volume splatting. Our framework and its view-independent representation allow for simple and straightforward editing of 3D video. In order to demonstrate its flexibility, we show compositing techniques and spatiotemporal effects.

17.
A hierarchical movie video summarization method (total citations: 1; self-citations: 0; citations by others: 1)
Organizing video data sensibly is important for content-based video analysis and retrieval. A movie summarization method based on a motion attention model is proposed. First, a clustering algorithm based on a sliding shot window organizes similar shots into shot classes. Then, following the development patterns of movie scene content and building on three defined temporal relations between two shot classes, a scene detection method based on the spatio-temporal constraints between shot classes is proposed. Finally, a motion attention model selects the important shots and representative frames within each scene, and a hierarchical video summary (scene level and shot level) is built from the selected representative frames and the keyframes of the important shots. The method covers video content comprehensively while highlighting the important content, and is well suited to fast browsing and retrieval of movie videos.
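A minimal sketch of motion-attention-based selection of important shots and representative frames, assuming per-frame optical-flow magnitudes are already available for each shot; the attention score is an illustrative proxy for the paper's motion attention model.

import numpy as np

def motion_attention(flow_mags):
    # Frames with strong motion attract attention; a real model would also
    # weight spatial dominance and direction consistency of the flow field.
    return np.asarray(flow_mags, dtype=float)

def summarize_shot(flow_mags):
    att = motion_attention(flow_mags)
    return {"importance": float(att.mean()),        # shot-level score
            "keyframe": int(att.argmax())}          # frame of peak attention

shots = {
    "shot_00": [0.1, 0.2, 0.1, 0.1],   # mostly static dialogue
    "shot_01": [0.5, 2.4, 3.1, 1.0],   # action: high motion attention
}
summary = {k: summarize_shot(v) for k, v in shots.items()}
important = max(summary, key=lambda k: summary[k]["importance"])
print(summary, "-> most important:", important)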

18.
Motion layer extraction in the presence of occlusion using graph cuts (total citations: 1; self-citations: 0; citations by others: 1)
Extracting layers from video is very important for video representation, analysis, compression, and synthesis. Assuming that a scene can be approximately described by multiple planar regions, this paper describes a robust and novel approach to automatically extracting a set of affine or projective transformations induced by these regions, detecting the occlusion pixels over multiple consecutive frames, and segmenting the scene into several motion layers. First, after determining a number of seed regions using correspondences in two frames, we expand the seed regions and reject outliers by employing the graph cuts method integrated with a level-set representation. Next, these initial regions are merged into several initial layers according to motion similarity. Third, an occlusion-order constraint over multiple frames is explored; it enforces that the occlusion area increases with temporal order over a short period, and it effectively maintains segmentation consistency over multiple consecutive frames. The correct layer segmentation is then obtained using a graph cuts algorithm, and the occlusions between overlapping layers are explicitly determined. Several experimental results demonstrate that our approach is effective and robust.
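A minimal sketch of the per-region affine motion estimation that seeds the layer extraction, assuming point correspondences inside a seed region are given; the graph-cut assignment and occlusion reasoning steps are not shown.

import numpy as np

def fit_affine(pts_src, pts_dst):
    """Solve dst ~ A @ [x, y, 1] for a 2x3 affine matrix by least squares."""
    n = len(pts_src)
    X = np.hstack([pts_src, np.ones((n, 1))])      # (n, 3) homogeneous points
    A, *_ = np.linalg.lstsq(X, pts_dst, rcond=None)
    return A.T                                      # (2, 3)

def residuals(A, pts_src, pts_dst):
    # Large residuals flag pixels that do not follow this layer's motion.
    X = np.hstack([pts_src, np.ones((len(pts_src), 1))])
    return np.linalg.norm(X @ A.T - pts_dst, axis=1)

rng = np.random.default_rng(6)
src = rng.random((20, 2)) * 100
true_A = np.array([[1.02, 0.01, 3.0], [-0.01, 0.99, -2.0]])
dst = np.hstack([src, np.ones((20, 1))]) @ true_A.T + 0.1 * rng.standard_normal((20, 2))
A = fit_affine(src, dst)
print(np.round(A, 2), float(residuals(A, src, dst).mean()))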

19.
A long-standing goal in scene understanding is to obtain interpretable and editable representations that can be constructed directly from a raw monocular RGB-D video, without requiring a specialized hardware setup or priors. The problem is significantly more challenging in the presence of multiple moving and/or deforming objects. Traditional methods have approached this setup with a mix of simplifications, scene priors, pretrained templates, or known deformation models. The advent of neural representations, especially neural implicit representations and radiance fields, opens the possibility of end-to-end optimization to collectively capture geometry, appearance, and object motion. However, current approaches produce global scene encodings, assume multiview capture with limited or no motion in the scenes, and do not facilitate easy manipulation beyond novel view synthesis. In this work, we introduce a factored neural scene representation that can be learned directly from a monocular RGB-D video to produce object-level neural representations with an explicit encoding of object movement (e.g., rigid trajectory) and/or deformations (e.g., nonrigid movement). We evaluate our method against a set of neural approaches on both synthetic and real data to demonstrate that the representation is efficient, interpretable, and editable (e.g., changing an object's trajectory). Code and data are available at: http://geometry.cs.ucl.ac.uk/projects/2023/factorednerf/

20.
Exploring video content structure for hierarchical summarization (total citations: 4; self-citations: 0; citations by others: 4)
In this paper, we propose a hierarchical video summarization strategy that explores video content structure to provide users with a scalable, multilevel video summary. First, video shot segmentation and keyframe extraction algorithms are applied to parse video sequences into physical shots and discrete keyframes. Next, an affinity (self-correlation) matrix is constructed to merge visually similar shots into clusters (supergroups). Since high similarity between video shots does not necessarily imply that they belong to the same story unit, temporal information is then incorporated by merging temporally adjacent shots (within a specified distance) from each supergroup into video groups. A video scene detection algorithm is proposed to merge temporally or spatially correlated video groups into scenario units, followed by a scene-clustering algorithm that eliminates visual redundancy among the units. A hierarchical video content structure of increasing granularity is constructed from the clustered scenes, video scenes, and video groups down to keyframes. Finally, we introduce a hierarchical video summarization scheme that executes different approaches at different levels of the content hierarchy to statically or dynamically construct the video summary. Extensive experiments on real-world videos validate the effectiveness of the proposed approach.
