Similar Documents
Found 20 similar documents (search time: 31 ms)
1.
Directors employ a process called “color grading” to add color styles to feature films. Color grading is used for a number of reasons, such as accentuating a certain emotion or expressing the signature look of a director. We collect a database of feature film clips and label them with tags such as director, emotion, and genre. We then learn a model that maps from the low-level color and tone properties of film clips to the associated labels. This model allows us to examine a number of common hypotheses on the use of color to achieve goals, such as specific emotions. We also describe a method to apply our learned color styles to new images and videos. Along with our analysis of color grading techniques, we demonstrate a number of images and videos that are automatically filtered to resemble certain film styles.
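The abstract does not specify the learned model, but the idea of mapping low-level color statistics to style tags can be illustrated with a minimal, hypothetical sketch: per-channel mean/std as the feature vector and nearest-neighbour matching as the classifier (both are stand-ins, not the paper's actual feature set or learner).

```python
import numpy as np

def color_features(img):
    """Low-level color/tone descriptor of a frame: per-channel mean and
    standard deviation (a hypothetical simplification of a real feature set).
    img: (H, W, 3) float array."""
    flat = img.reshape(-1, 3)
    return np.concatenate([flat.mean(axis=0), flat.std(axis=0)])

def nearest_label(img, examples):
    """Predict a tag by nearest-neighbour matching in feature space.
    examples: list of (feature_vector, label) pairs."""
    f = color_features(img)
    dists = [np.linalg.norm(f - fe) for fe, _ in examples]
    return examples[int(np.argmin(dists))][1]
```

A clip tagged e.g. "warm" would be matched by any query frame whose color statistics sit closest to the warm exemplars.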

2.
We explore creating smooth transitions between videos of different scenes. As in traditional image morphing, good spatial correspondence is crucial to prevent ghosting, especially at silhouettes. Video morphing presents added challenges. Because motions are often unsynchronized, temporal alignment is also necessary. Applying morphing to individual frames leads to discontinuities, so temporal coherence must be considered. Our approach is to optimize a full spatiotemporal mapping between the two videos. We reduce tedious interactions by letting the optimization derive the fine-scale map given only sparse user-specified constraints. For robustness, the optimization objective examines structural similarity of the video content. We demonstrate the approach on a variety of videos, obtaining results using few explicit correspondences.

3.
This paper proposes a novel system that “rephotographs” a historical photograph using a collection of images. Rather than finding the exact viewpoint of the historical photo, users only need to take a number of photographs around the target scene. We adopt the structure-from-motion technique to estimate the spatial relationship among these photographs and construct a 3D point cloud. Based on user-specified correspondences between the projected 3D point cloud and the historical photograph, the camera parameters of the historical photograph are estimated. We then combine forward- and backward-warped images to render the result. Finally, inpainting and content-preserving warping are used to refine it, producing from the photo collection a photograph taken from the same viewpoint as the historical one.

4.
This paper presents a novel video stabilization approach that leverages the multi-plane structure of a video scene to stabilize inter-frame motion. In contrast to previous stabilization procedures that operate in a single plane, our approach targets multi-plane videos and builds their multi-plane structure so that stabilization can be performed in each plane separately. To this end, a robust plane detection scheme is devised that detects multiple planes by classifying feature trajectories according to the reprojection errors generated by plane-induced homographies. An improved planar stabilization technique is then applied, conforming to the compensated homography in each plane. Finally, the stabilized planes are coherently fused by content-preserving image warps to obtain the output stabilized frames. Our approach does not require stereo reconstruction, yet produces commendable results because the stabilization is aware of the multi-plane structure. Experimental results demonstrate the effectiveness and efficiency of our approach for robust stabilization of multi-plane videos.
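The classification step above can be sketched concretely: a feature trajectory is assigned to whichever plane's homography reprojects it with the smallest error, or rejected if no plane explains it. This is a minimal illustration of the criterion, not the paper's full scheme; the function names and the pixel tolerance are assumptions.

```python
import numpy as np

def reprojection_error(H, src, dst):
    """Mean reprojection error of point matches under homography H.
    H: 3x3 matrix; src, dst: (N, 2) arrays of matched points."""
    src_h = np.hstack([src, np.ones((len(src), 1))])  # homogeneous coords
    proj = src_h @ H.T
    proj = proj[:, :2] / proj[:, 2:3]                 # back to Euclidean
    return np.linalg.norm(proj - dst, axis=1).mean()

def assign_to_plane(H_list, src, dst, tol=3.0):
    """Index of the plane homography that best explains the matches,
    or -1 if even the best one exceeds the tolerance (in pixels)."""
    errs = [reprojection_error(H, src, dst) for H in H_list]
    best = int(np.argmin(errs))
    return best if errs[best] < tol else -1
```

In practice the homographies themselves would be estimated robustly (e.g. with RANSAC) from the feature tracks; here they are taken as given.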

5.
Eleven tone-mapping operators intended for video processing are analyzed and evaluated with camera-captured and computer-generated high-dynamic-range content. After optimizing the parameters of the operators in a formal experiment, we inspect and rate the artifacts (flickering, ghosting, temporal color consistency) and color rendition problems (brightness, contrast and color saturation) they produce. This allows us to identify major problems and challenges that video tone-mapping needs to address. Then, we compare the tone-mapping results in a pair-wise comparison experiment to identify the operators that, on average, can be expected to perform better than the others and to assess the magnitude of differences between the best performing operators.

6.
We propose a method for creating a bounding volume hierarchy (BVH) that is optimized for all frames of a given animated scene. The method is based on a novel extension of the surface area heuristic to the temporal domain (T-SAH). We perform iterative BVH optimization using T-SAH and create a single BVH accounting for the scene geometry distribution at different frames of the animation. Having a single optimized BVH for the whole animation makes our method extremely easy to integrate into any application using BVHs, limiting the per-frame overhead to refitting the bounding volumes. We evaluated the T-SAH-optimized BVHs in the scope of real-time GPU ray tracing and demonstrate that our method can handle even highly complex inputs with large deformations and significant topology changes. The results show that in the vast majority of tested scenes our method provides significantly better run-time performance than traditional SAH, and also better performance than GPU-based per-frame BVH rebuilds.
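To make the temporal extension concrete, here is a minimal sketch under a simplifying assumption: the classic SAH cost of a split is computed per frame from that frame's bounding boxes, and the temporal cost is taken as the plain average over frames. The real T-SAH formulation may weight frames differently; this only illustrates the idea of scoring one topology against the whole animation.

```python
import numpy as np

def surface_area(lo, hi):
    """Surface area of an axis-aligned box given min/max corners."""
    d = np.maximum(np.asarray(hi) - np.asarray(lo), 0.0)
    return 2.0 * (d[0] * d[1] + d[1] * d[2] + d[0] * d[2])

def sah_cost(parent, left, right, n_left, n_right, c_trav=1.0, c_isect=1.0):
    """Classic SAH: traversal cost plus area-weighted intersection costs.
    Each box is a (lo, hi) pair of 3-vectors."""
    sa_p = surface_area(*parent)
    return (c_trav
            + c_isect * n_left * surface_area(*left) / sa_p
            + c_isect * n_right * surface_area(*right) / sa_p)

def t_sah_cost(boxes_per_frame, n_left, n_right):
    """Temporal SAH as sketched here: average the per-frame SAH costs.
    boxes_per_frame: list of (parent, left, right) AABB triples, one per frame."""
    costs = [sah_cost(p, l, r, n_left, n_right) for p, l, r in boxes_per_frame]
    return sum(costs) / len(costs)
```

A split that is cheap in one frame but degenerates in another is thus penalized, which is what lets a single BVH serve the whole animation.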

7.
Temporal coherence is an important problem in non-photorealistic rendering for videos. In this paper, we present a novel approach to enhance temporal coherence in video painting. Instead of painting each video frame independently, our approach first partitions the video into multiple motion layers and then places brush strokes on the layers to generate the painted imagery. The extracted motion layers consist of one background layer and several object layers in each frame. The background layers from all frames are aligned into a panoramic image, on which brush strokes are placed to paint the background in one shot. The strokes used to paint the object layers are propagated frame by frame using smooth transformations defined by thin-plate splines. Once the background and object layers are painted, they are projected back to each frame and blended to form the final painting. Because the background is painted as a single image, our approach completely eliminates flickering in the background, and temporal coherence on the object layers is also significantly enhanced by the smooth transformations across frames. Additionally, by controlling the painting strokes on different layers, our approach can easily generate painted video in multiple styles. Experimental results show that our approach is both robust and efficient at generating plausible video paintings.

8.
Image matting aims at extracting foreground elements from an image by means of color and opacity (alpha) estimation. While much progress has been made in recent years on improving the accuracy of matting techniques, one common problem has persisted: the low speed of matte computation. We present the first real-time matting technique for natural images and videos. Our technique is based on the observation that, within small neighborhoods, pixels tend to share similar attributes; independently treating each pixel in the unknown regions of a trimap therefore results in a great deal of redundant work. We show how this computation can be significantly and safely reduced by a careful selection of pairs of background and foreground samples. Our technique achieves speedups of up to two orders of magnitude over previous ones while producing high-quality alpha mattes, whose quality has been verified through an independent benchmark. The speed of our technique enables, for the first time, real-time alpha matting of videos, and has the potential to enable a new class of exciting applications.

9.
We present a non-photorealistic rendering technique to transform color images and videos into painterly abstractions. It is based on a generalization of the Kuwahara filter that is adapted to the local shape of features, derived from the smoothed structure tensor. Contrary to conventional edge-preserving filters, our filter generates a painting-like flattening effect along the local feature directions while preserving shape boundaries. As opposed to conventional painting algorithms, it produces temporally coherent video abstraction without extra processing. The GPU implementation of our method processes video in real-time. The results have the clearness of cartoon illustrations but also exhibit directional information as found in oil paintings.

10.
Images and videos captured by portable devices (e.g., cellphones, DV cameras) often have limited fields of view. Image stitching, also referred to as mosaicing or panorama creation, can produce a wide-angle image by compositing several photographs. Although various methods have been developed for image stitching in recent years, few works address the video stitching problem. In this paper, we present the first system to stitch videos captured by hand-held cameras. We first recover the 3D camera paths and a sparse set of 3D scene points using the CoSLAM system, and densely reconstruct the 3D scene in the overlapping regions. Then, we generate a smooth virtual camera path that stays in the middle of the original paths. Finally, the stitched video is synthesized along the virtual path as if it were taken from this new trajectory. The warping required for the stitching is obtained by optimizing over both temporal stability and alignment quality while leveraging the 3D information at our disposal. The experiments show that our method produces high-quality stitching results for various challenging scenarios.

11.
We propose a new real-time temporal filtering and antialiasing (AA) method for rasterization graphics pipelines. Our method is based on Pixel History Linear Models (PHLM), a new concept for modeling the history of pixel shading values over time using linear models. Based on PHLM, our method can predict per-pixel variations of the shading function between consecutive frames, combining temporal reprojection with per-pixel shading predictions to provide temporally coherent shading even in the presence of very noisy input images. Our method addresses both spatial and temporal aliasing problems within a single filtering framework that minimizes filtering error through a recursive least-squares algorithm. We demonstrate our method working with a commercial deferred-shading engine for rasterization and with our own OpenGL deferred-shading renderer. We have implemented our method on the GPU, where it shows a significant reduction of temporal flicker in very challenging scenarios including foliage rendering, complex non-linear camera motions, dynamic lighting, reflections, shadows, and fine geometric details. Our approach, based on PHLM, avoids visible ghosting artifacts and reduces the filtering overblur characteristic of temporal deflickering methods, while remaining comparable to state-of-the-art real-time filters in terms of temporal coherence.
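The recursive least-squares idea behind PHLM can be sketched in isolation: a tiny per-pixel linear model of shading over time, updated each frame with the textbook RLS recursion (with a forgetting factor so old frames fade out). This is a hypothetical, scalar stand-in for illustration, not the paper's filter; class name and parameter choices are assumptions.

```python
import numpy as np

class PixelHistoryRLS:
    """Per-pixel linear model of shading over time, v(t) ~ w . [t, 1],
    updated with a recursive-least-squares step (forgetting factor lam)."""
    def __init__(self, lam=0.95):
        self.lam = lam
        self.w = np.zeros(2)        # model weights [slope, offset]
        self.P = np.eye(2) * 1e3    # inverse-covariance estimate (weak prior)

    def update(self, t, value):
        x = np.array([t, 1.0])
        # Kalman-style gain for the RLS update
        k = self.P @ x / (self.lam + x @ self.P @ x)
        self.w = self.w + k * (value - self.w @ x)
        self.P = (self.P - np.outer(k, x @ self.P)) / self.lam
        return self.w @ x           # filtered prediction for this frame

    def predict(self, t):
        return self.w @ np.array([t, 1.0])
```

Because each update is O(1) per pixel, the same recursion is cheap enough to run per frame on a GPU, which is what makes the approach viable in real time.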

12.
Image storyboards of films and videos are useful for quick browsing and automatic video processing. A common approach for producing image storyboards is to display a set of selected key-frames in temporal order, which has been widely used for 2D video data. However, such an approach cannot be applied for 3D animation data because different information is revealed by changing parameters such as the viewing angle and the duration of the animation. Also, the interests of the viewer may be different from person to person. As a result, it is difficult to draw a single image that perfectly abstracts the entire 3D animation data. In this paper, we propose a system that allows users to interactively browse an animation and produce a comic sequence out of it. Each snapshot in the comic optimally visualizes a duration of the original animation, taking into account the geometry and motion of the characters and objects in the scene. This is achieved by a novel algorithm that automatically produces a hierarchy of snapshots from the input animation. Our user interface allows users to arrange the snapshots according to the complexity of the movements by the characters and objects, the duration of the animation and the page area to visualize the comic sequence. Our system is useful for quickly browsing through a large amount of animation data and semi-automatically synthesizing a storyboard from a long sequence of animation.

13.
We present a new solution for temporal coherence in non-photorealistic rendering (NPR) of animations. Given the conflicting goals of preserving the 2D aspect of the style and the 3D scene motion, any such solution is a tradeoff. We observe that primitive-based methods in NPR can be seen as texture-based methods when using large numbers of primitives, leading to our key insight: this process is similar to sparse convolution noise in procedural texturing. Consequently, we present a new primitive for NPR based on Gabor noise that preserves the 2D aspect of noise, conveys the 3D motion of the scene, and is temporally continuous. We can thus use standard techniques from procedural texturing to create various styles, which we show for interactive NPR applications. We also present a user study to evaluate this and existing solutions, and to provide more insight into the tradeoff implied by temporal coherence. The results of the study indicate that maintaining coherent motion is important, and that our new solution provides a good compromise between the 2D aspect of the style and the 3D motion.

14.
Mappings between color spaces are ubiquitous in image processing problems such as gamut mapping, decolorization, and image optimization for color-blind people. Simple color transformations often result in information loss and ambiguities, and one wishes to find an image-specific transformation that would preserve as much as possible the structure of the original image in the target color space. In this paper, we propose Laplacian colormaps, a generic framework for structure-preserving color transformations between images. We use the image Laplacian to capture the structural information, and show that if the color transformation between two images preserves the structure, the respective Laplacians have similar eigenvectors, or in other words, are approximately jointly diagonalizable. Employing the relation between joint diagonalizability and commutativity of matrices, we use Laplacians commutativity as a criterion of color mapping quality and minimize it w.r.t. the parameters of a color transformation to achieve optimal structure preservation. We show numerous applications of our approach, including color-to-gray conversion, gamut mapping, multispectral image fusion, and image optimization for color deficient viewers.
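The commutativity criterion can be sketched numerically: build a graph Laplacian for each image and measure the Frobenius norm of the commutator. The 4-neighbour Gaussian-affinity Laplacian below is a simplified stand-in for the paper's image Laplacian, and the dense construction is only viable for tiny images; both are illustration assumptions.

```python
import numpy as np

def image_laplacian(img, sigma=0.1):
    """Graph Laplacian of a grayscale image on a 4-neighbour grid with
    Gaussian color-affinity edge weights. img: (H, W) float array."""
    h, w = img.shape
    n = h * w
    W_mat = np.zeros((n, n))
    idx = lambda y, x: y * w + x
    for y in range(h):
        for x in range(w):
            for dy, dx in ((0, 1), (1, 0)):   # right and down neighbours
                yy, xx = y + dy, x + dx
                if yy < h and xx < w:
                    a = np.exp(-(img[y, x] - img[yy, xx]) ** 2 / (2 * sigma ** 2))
                    W_mat[idx(y, x), idx(yy, xx)] = a
                    W_mat[idx(yy, xx), idx(y, x)] = a
    return np.diag(W_mat.sum(axis=1)) - W_mat   # L = D - W

def commutativity_cost(L1, L2):
    """Frobenius norm of the commutator [L1, L2]; zero when the Laplacians
    commute, i.e. are jointly diagonalizable."""
    return np.linalg.norm(L1 @ L2 - L2 @ L1, 'fro')
```

Minimizing this cost over the parameters of a color transformation is then the structure-preservation objective the paragraph describes.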

15.
We present a new image completion method based on an additional large-displacement-view (LDV) image of the same scene for faithfully and automatically repairing large missing regions in the target image. A coarse-to-fine distortion correction algorithm is proposed to minimize the perspective distortion in the corresponding parts of the common scene regions in the LDV image. First, under the assumption of a planar scene, the LDV image is warped according to a homography to generate the initial correction result. Second, the residual distortions in the common known scene regions are revealed by a mismatch detection mechanism and relaxed by energy optimization of overlap correspondences, under the expectations of color constancy and displacement-field smoothness. The fundamental matrix for the two views is then computed from the reliable correspondence set. Third, under the constraints of epipolar geometry, displacement-field smoothness, and color consistency of neighboring pixels, the missing pixels are restored in order according to a specially defined repairing-priority function. We finally eliminate the ghosting between the repaired region and its surroundings by Poisson image blending. Experimental results demonstrate that our method outperforms recent state-of-the-art image completion methods at repairing large missing areas with complex structure information.

16.
Color transfer is an image processing technique that produces a new image combining one source image's content with another image's color style. While able to produce convincing results, Reinhard et al.'s pioneering work has two problems: colors from different regions may be mixed up, and fidelity may be lost. Many local color transfer algorithms have been proposed to resolve the first problem, but the second has received little attention. In this paper, a novel color transfer algorithm is presented to resolve the fidelity problem in terms of scene details and colors. It is well known that the human visual system is more sensitive to local intensity differences than to intensity itself; we therefore consider preserving the color gradient necessary for scene fidelity. We formulate color transfer as an optimization problem and solve it in two steps: histogram matching followed by a gradient-preserving optimization. Following this notion of fidelity in terms of color and gradient, we also propose a metric for objectively evaluating the performance of example-based color transfer algorithms. The experimental results show the validity and high fidelity of our algorithm, and that it can also handle local color transfer.
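The first of the two steps, classic histogram matching, can be sketched directly: each source value is remapped to the value at the same quantile of the reference distribution. This covers only the matching step; the gradient-preserving optimization that follows it in the paper is omitted here.

```python
import numpy as np

def histogram_match(source, reference):
    """Remap source values so their empirical distribution matches the
    reference's (rank-to-quantile mapping). Works on one channel."""
    src = np.asarray(source, float).ravel()
    ref = np.asarray(reference, float).ravel()
    order = np.argsort(src)
    matched = np.empty_like(src)
    # the i-th smallest source value receives the value at the same
    # quantile of the sorted reference distribution
    matched[order] = np.interp(
        np.linspace(0, 1, len(src)),
        np.linspace(0, 1, len(ref)),
        np.sort(ref))
    return matched.reshape(np.shape(source))
```

Note that the mapping is rank-preserving, which is why a gradient-preserving refinement step is still needed afterwards: matching histograms alone can flatten or exaggerate local intensity differences.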

17.
The generation of inbetween frames that interpolate a given set of key frames is a major component in the production of a 2D feature animation. Our objective is to considerably reduce the cost of the inbetweening phase by offering an intuitive and effective interactive environment that automates inbetweening when possible while allowing the artist to guide, complement, or override the results. Tight inbetweens, which interpolate similar key frames, are particularly time-consuming and tedious to draw. Therefore, we focus on automating these high-precision and expensive portions of the process. We have designed a set of user-guided semi-automatic techniques that fit well with current practice and minimize the number of required artist-gestures. We present a novel technique for stroke interpolation from only two keys which combines a stroke motion constructed from logarithmic spiral vertex trajectories with a stroke deformation based on curvature averaging and twisting warps. We discuss our system in the context of a feature animation production environment and evaluate our approach with real production data.
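A logarithmic-spiral vertex trajectory, as mentioned above, can be sketched as follows: relative to a spiral center, the angle interpolates linearly and the log-radius interpolates linearly. This is a minimal geometric illustration under the assumption of a known center; in the actual system the center would be derived from the key-frame strokes.

```python
import numpy as np

def log_spiral_point(p0, p1, c, t):
    """Point at parameter t in [0, 1] on the logarithmic-spiral arc from
    p0 to p1 about center c (all 2D). Assumes p0 and p1 differ from c."""
    c = np.asarray(c, float)
    v0, v1 = np.asarray(p0, float) - c, np.asarray(p1, float) - c
    r0, r1 = np.hypot(*v0), np.hypot(*v1)
    a0, a1 = np.arctan2(v0[1], v0[0]), np.arctan2(v1[1], v1[0])
    da = (a1 - a0 + np.pi) % (2 * np.pi) - np.pi   # shorter angular direction
    r = np.exp((1 - t) * np.log(r0) + t * np.log(r1))  # log-radius lerp
    a = a0 + t * da                                     # angle lerp
    return c + r * np.array([np.cos(a), np.sin(a)])
```

Sampling t over a stroke's vertices yields the smooth rotating-and-scaling motion that linear vertex interpolation lacks.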

18.
19.
We introduce a novel, efficient technique for automatically transforming a generic renderable 3D scene into a simple graph representation named ExploreMaps, where nodes are well-placed points of view, called probes, and arcs are smooth paths between neighboring probes. Each probe is associated with a panoramic image enriched with preferred viewing orientations, and each path with a panoramic video. Our GPU-accelerated, unattended construction pipeline distributes probes so as to guarantee coverage of the scene while accounting for perceptual criteria, and then finds smooth, good-looking paths between neighboring probes. Images and videos are precomputed at construction time with off-line photorealistic rendering engines, providing a convincing 3D visualization beyond the limits of current real-time graphics techniques. At run time, the graph is exploited both for creating automatic scene indexes and movie previews of complex scenes, and for supporting interactive exploration through a low-DOF assisted navigation interface and the visual indexing of the scene provided by the selected viewpoints. Owing to negligible CPU overhead and very limited use of GPU functionality, real-time performance is achieved in emerging WebGL-based environments even on low-powered mobile devices.

20.
We present an automatic image-recoloring technique for enhancing color contrast for dichromats whose computational cost varies linearly with the number of input pixels. Our approach can be efficiently implemented on GPUs, and we show that for typical image sizes it is up to two orders of magnitude faster than the current state-of-the-art technique. Unlike previous approaches, ours preserves temporal coherence and is therefore suitable for video recoloring. We demonstrate the effectiveness of our technique by integrating it into a visualization system and showing, for the first time, real-time high-quality recolored visualizations for dichromats.

