Similar Articles
20 similar articles found (search time: 31 ms)
1.
Video painting with space-time-varying style parameters   Cited by: 3 (0 self-citations, 3 by others)
Artists use different means of stylization to control the focus on different objects in a scene, which allows them to portray complex meaning and achieve particular artistic effects. Most prior work on painterly rendering of videos, however, uses only a single painting style with fixed global parameters, irrespective of the objects and their layout in the images, which often provides inadequate artistic control. Moreover, brush stroke orientation is typically assumed to follow an everywhere-continuous directional field. In this paper, we propose a video painting system that accounts for the spatial support of objects in the images or videos and uses this information to specify style parameters and stroke orientation for painterly rendering. Since objects occupy distinct image locations and move relatively smoothly from one video frame to another, our object-based painterly rendering approach is characterized by style parameters that vary coherently in space and time. Space-time-varying style parameters allow greater artistic freedom, such as emphasizing or de-emphasizing, increasing or decreasing the contrast of, or exaggerating or abstracting different objects in the scene in a temporally coherent fashion.

2.
We introduce a novel technique to generate a painterly art map (PAM) for 3D non-photorealistic rendering. Our technique can automatically transfer brush stroke textures and color changes to 3D models from samples of a painted image, enabling the generation of stylized images and animations in the style of a given artwork. This new approach works particularly well for a rich variety of brush strokes, ranging from simple 1D and 2D line-art strokes to very complicated ones with significant variations in stroke characteristics. During the rendering/animation process, the coherence of brush stroke textures and color changes over 3D surfaces is well maintained. With PAM, we can also easily generate the illusion of flow animation over a 3D surface to convey the shape of a model.

3.
We present an algorithm that stylizes an input video into a painterly animation without user intervention. In particular, we focus on pointillist animation with stable temporal coherence, an important problem in non-photorealistic rendering of videos. To realize pointillist animation, the various characteristics of pointillism must be considered during the painting process to maintain temporal coherence. For this, we use the particle video algorithm, a recent approach to long-range motion estimation in video. Building on this method, we introduce a way to control the density of particles according to frame features and importance maps. Finally, we introduce stroke propagation methods that minimize the flickering of brush strokes.

4.
Painterly rendering with content-dependent natural paint strokes   Cited by: 1 (0 self-citations, 1 by others)
We present a new painterly rendering method that simulates artists' content-dependent painting process and the natural variation of hand-painted strokes. First, a new stroke layout strategy is proposed to enhance the contrast between large and small paint strokes, an important characteristic of hand-painted works: the input image is partitioned into non-uniform grids according to its importance map, and a paint stroke, constructed individually according to the grid size, is applied in each grid. Second, an anisotropic digital brush is designed to simulate a real paint brush; in particular, each bristle of the digital brush has an individual color, so that strokes rendered by the new brush can have multiple colors and naturally varied textures. Finally, we present a novel method to add lighting effects to the canvas. This lighting imitation method is robust and very easy to implement, and it significantly improves the quality of the rendering. Compared with traditional painterly rendering approaches, the new method simulates the real painting procedure more closely, and our experimental results show that it can produce vivid paintings with fewer artifacts.

5.
To accelerate path-traced rendering of 3D scenes, a visual-saliency-driven indirect-illumination reuse algorithm is proposed. First, exploiting the property of visual perception that regions of interest have high saliency while other regions have low saliency, a 2D saliency map of the rendered frame is computed from the image's color, edge, depth, and motion information. Then, indirect illumination is re-rendered only in high-saliency regions, while low-saliency regions reuse the indirect illumination of the previous frame when certain conditions are met, thereby accelerating rendering. Experimental results show that the algorithm produces realistic global illumination, improves rendering speed in multiple test scenes, and reaches up to 5.89× the speed of full high-quality rendering.
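The cue combination and the per-pixel reuse decision described above can be sketched as follows. The equal cue weights and the 0.5 reuse threshold are illustrative assumptions, not values from the paper:

```python
import numpy as np

def saliency_map(color, edge, depth, motion, weights=(0.25, 0.25, 0.25, 0.25)):
    """Combine per-pixel cues into one 2D saliency map in [0, 1].

    Each cue is an HxW float array; cues are min-max normalized before the
    weighted sum so that no channel dominates by scale alone.
    """
    def normalize(c):
        lo, hi = c.min(), c.max()
        return (c - lo) / (hi - lo) if hi > lo else np.zeros_like(c)

    cues = [normalize(c) for c in (color, edge, depth, motion)]
    s = sum(w * c for w, c in zip(weights, cues))
    return s / s.max() if s.max() > 0 else s

def reuse_mask(saliency, threshold=0.5):
    """True where the previous frame's indirect illumination may be reused
    (low saliency); False where it must be re-rendered (high saliency)."""
    return saliency < threshold
```

In a renderer, the mask would gate which pixels enter the indirect-lighting pass each frame.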

6.
王京  车英慧  郝爱民 《计算机工程》2007,33(20):216-218
To render scenes containing materials with high dynamic range, this paper extends the lighting model from conventional materials to high-dynamic-range (HDR) materials and presents a GPU-based real-time rendering algorithm. The HDR rendering results produced by this algorithm must be further processed before they can be displayed correctly on output devices that have only low dynamic range. To this end, the paper presents a combined method that integrates the real-time HDR-material rendering algorithm, a physically based lens-flare simulation, and a tone-mapping algorithm, and validates the method through an implementation.
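The abstract does not name a specific tone-mapping operator; as an illustration of the tone-mapping stage, here is the global Reinhard operator, a standard choice for compressing HDR luminance into a displayable range (the `key` value 0.18 is a conventional default, not a value from the paper):

```python
import numpy as np

def reinhard_tonemap(hdr, key=0.18, eps=1e-6):
    """Map an HDR luminance image (HxW, positive floats) into [0, 1).

    Global Reinhard operator: scale by the log-average luminance of the
    frame, then compress with L / (1 + L).
    """
    log_avg = np.exp(np.mean(np.log(hdr + eps)))  # log-average luminance
    scaled = key * hdr / log_avg                  # exposure scaling
    return scaled / (1.0 + scaled)                # compressive curve
```

The operator is monotone, so the ordering of luminances is preserved while the range is compressed.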

7.
To address the registration errors and ghosting artifacts that moving objects cause in image stitching, a stitching algorithm for dynamic scenes is proposed based on multi-scale PHOG features and an optimal seam line. First, building on corner detection in multi-scale space, the pyramid histogram of oriented gradients (PHOG) descriptor is introduced to generate multi-scale PHOG features for stable registration, avoiding the local influence of moving objects. Then an energy function is constructed, and a graph-cut algorithm searches for the seam line with the smallest geometric and intensity differences, removing motion ghosts. Experimental results show that the method achieves high stitching accuracy and good visual quality for scenes containing moving objects.
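The paper finds the seam with a graph cut over an energy combining geometric and intensity differences. As a simplified sketch, the function below finds a vertical seam through the overlap region by dynamic programming over squared intensity differences only (the gradient term and the full graph-cut search are omitted):

```python
import numpy as np

def optimal_seam(overlap_a, overlap_b):
    """Vertical seam through the overlap of two aligned grayscale images
    (HxW float arrays) that minimizes the squared intensity difference,
    found with per-row dynamic programming.
    """
    cost = (overlap_a - overlap_b) ** 2
    h, w = cost.shape
    acc = cost.copy()
    for y in range(1, h):
        left = np.roll(acc[y - 1], 1);   left[0] = np.inf
        right = np.roll(acc[y - 1], -1); right[-1] = np.inf
        acc[y] += np.minimum(np.minimum(left, acc[y - 1]), right)
    # Backtrack from the cheapest endpoint in the last row.
    seam = np.empty(h, dtype=int)
    seam[-1] = int(np.argmin(acc[-1]))
    for y in range(h - 2, -1, -1):
        x = seam[y + 1]
        lo, hi = max(0, x - 1), min(w, x + 2)
        seam[y] = lo + int(np.argmin(acc[y, lo:hi]))
    return seam
```

Pixels left of the seam would be taken from one image and pixels right of it from the other, so the cut passes where the two exposures agree best.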

8.
Increasing the level of detail (LOD) of brush strokes within areas of interest improves the realism of painterly rendering. Using a modified quad-tree, we segment an image into areas of similar saliency; each segment is then used to control the brush strokes during rendering. We can also simulate the steps of real oil painting based on the saliency information. Our method runs in a reasonable time and produces results that are visually appealing and competitive with previous techniques.

9.
We present an automatic and robust technique for creating non-photorealistic rendering (NPR) and animation from a video, one that depicts the shape details and follows the motion of the underlying objects. We generate the NPR from the initial frame of the source video using a greedy algorithm for stroke placement and stroke models, in combination with a saliency map and a flow-guided difference-of-Gaussians filter. Our stroke model uses a set of triangles whose vertices are particles and whose edges are springs. Within a physics-based framework, the generated and rendered strokes are translated, rotated, and deformed by forces exerted by the subsequent frames; the external forces acting on strokes are calculated from temporally and spatially smoothed per-pixel optical flow vectors. After simulating each frame, we delete unnecessary strokes and add new strokes for disappearing and appearing objects, but only when necessary, to avoid popping and scintillation. Our framework automatically generates a coherent animation of rendered strokes, preserving appearance details and animating strokes along with the underlying objects. This had been difficult to achieve with previous user-guided methods and with automatic methods limited to simple transformations.

10.
This paper presents an interactive system for creating painterly animation from video sequences. Previous approaches to painterly animation typically emphasize either purely automatic stroke synthesis or purely manual stroke keyframing. Our system supports a spectrum of interaction between these two approaches, allowing the user more direct control over stroke synthesis. We introduce an approach for controlling the results of painterly animation: keyframed Control Strokes can affect the placement, orientation, movement, and color of automatically synthesized strokes. Furthermore, we introduce a new automatic synthesis algorithm that traces strokes through a video sequence greedily but, instead of a vector field, uses an objective function to guide placement. This allows the method to capture fine details, respect region boundaries, and achieve greater temporal coherence than previous methods. All editing is performed with a WYSIWYG interface in which the user can directly refine the animation. We demonstrate a variety of examples, both automatic and user-guided, with a variety of styles and source videos.

11.
12.
Image-based color ink diffusion rendering   Cited by: 3 (0 self-citations, 3 by others)
This paper proposes an image-based painterly rendering algorithm for automatically synthesizing an image with color ink diffusion. We suggest a mathematical model with a physical basis to simulate the phenomenon of colloidal color ink diffusing into absorbent paper. Our algorithm contains three main parts: a feature extraction phase, a Kubelka-Munk (KM) color mixing phase, and a color ink diffusion synthesis phase. In the feature extraction phase, the information of the reference image is simplified by luminance division and color segmentation. In the color mixing phase, KM theory is employed to approximate the result when one pigment is painted upon another pigment layer. Then, in the color ink diffusion synthesis phase, the physically based model that we propose is employed to simulate the result of color ink diffusion in absorbent paper using a texture synthesis technique. Our image-based color ink diffusion rendering (IBCIDR) algorithm eliminates the drawback of conventional Chinese ink simulations, which are limited to the black ink domain, and our approach demonstrates that, without using any strokes, a color image can be automatically converted to the diffused ink style with a visually pleasing appearance.
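The KM color mixing step can be illustrated with the standard closed-form Kubelka-Munk solution for a single pigment layer of thickness `d` painted over a background of reflectance `Rg`; the paper's actual parameterization of pigments may differ:

```python
import numpy as np

def km_layer_reflectance(K, S, d, Rg):
    """Kubelka-Munk reflectance of a pigment layer over a background.

    K, S : absorption and scattering coefficients of the layer
    d    : layer thickness
    Rg   : reflectance of the background being painted over
    Standard KM solution: a = 1 + K/S, b = sqrt(a^2 - 1),
    R = (1 - Rg*(a - b*coth(b*S*d))) / (a - Rg + b*coth(b*S*d)).
    """
    K, S, Rg = (np.asarray(x, dtype=float) for x in (K, S, Rg))
    a = 1.0 + K / S
    b = np.sqrt(a * a - 1.0)
    coth = 1.0 / np.tanh(b * S * d)
    return (1.0 - Rg * (a - b * coth)) / (a - Rg + b * coth)
```

Two sanity checks follow directly from the formula: as `d → 0` the layer vanishes and `R → Rg`, and as `d → ∞` the layer hides the background and `R → a - b` (the masstone reflectance R∞).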

13.
The main aim of this paper is to propose a new neural algorithm that segments an observed scene into regions corresponding to different moving objects by analysing a time-varying image sequence. The method consists of a classification step, where the motion of small patches is recovered through an optimisation approach, and a segmentation step that merges neighbouring patches characterised by the same motion. Classification of motion is performed without optical flow computation: three-dimensional motion parameter estimates are obtained directly from the spatial and temporal image gradients by minimising an appropriate energy function with a Hopfield-like neural network. Network convergence is accelerated by integrating the quantitative estimation of the motion parameters with a qualitative estimate of the dominant motion using the geometric theory of differential equations.

14.
This paper proposes a new neural algorithm for segmenting an observed scene into regions corresponding to different moving objects by analyzing a time-varying image sequence. The method consists of a classification step, where the motion of small patches is characterized through an optimization approach, and a segmentation step that merges neighboring patches characterized by the same motion. Classification of motion is performed without optical flow computation: only the spatial and temporal image gradients enter an appropriate energy function, which is minimized with a Hopfield-like neural network whose output directly gives the 3D motion parameter estimates. Network convergence is accelerated by integrating the quantitative estimation of the motion parameters with a qualitative estimate of the dominant motion using the geometric theory of differential equations.

15.
Pattern Recognition Letters, 2003, 24(1-3): 113-128
This paper presents an efficient region-based motion segmentation method for segmenting moving objects in a traffic scene, with a focus on video monitoring systems (VMS). The method consists of two phases. First, in the motion detection phase, the positions of moving objects in the scene are determined using an adaptive thresholding method: instead of setting the threshold manually, we choose it automatically so as to detect the regions changed by moving objects. Second, in the motion segmentation phase, pixels with similar intensity and motion information are segmented by applying a weighted k-means clustering algorithm to the binary region of the motion mask obtained during motion detection. In this way we need not process the whole image, so computation time is reduced. Experimental results demonstrate robustness not only to variations in luminance and changes in environmental conditions, but also to occlusions among multiple moving objects.
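The abstract does not specify which adaptive thresholding rule is used; Otsu's method, which chooses the threshold that maximizes the between-class variance of the difference-image histogram, is one standard way to pick the value automatically:

```python
import numpy as np

def otsu_threshold(gray):
    """Automatically choose a threshold for a grayscale difference image
    (uint8 values 0-255) by maximizing between-class variance (Otsu).
    """
    hist = np.bincount(gray.ravel(), minlength=256).astype(float)
    p = hist / hist.sum()
    omega = np.cumsum(p)                    # probability of class 0
    mu = np.cumsum(p * np.arange(256))      # cumulative mean of class 0
    mu_t = mu[-1]                           # global mean
    with np.errstate(divide="ignore", invalid="ignore"):
        sigma_b = (mu_t * omega - mu) ** 2 / (omega * (1.0 - omega))
    sigma_b[~np.isfinite(sigma_b)] = 0.0
    return int(np.argmax(sigma_b))

def motion_mask(diff, thresh=None):
    """Binary motion mask from an absolute frame-difference image."""
    t = otsu_threshold(diff) if thresh is None else thresh
    return diff > t
```

The resulting binary mask is what the weighted k-means step would then cluster using intensity and motion features.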

16.
An artist usually does not draw all areas of a picture homogeneously but tries to make the work more expressive by emphasizing what is important while eliminating irrelevant details. Creating expressive painterly images with such accentuation remains a challenge because of the subjectivity of information selection. This paper presents a novel technique for automatically converting an input image into a pencil drawing with such emphasis and elimination effects. The proposed technique utilizes a saliency map, a computational model of visual attention, to predict the focus of attention in the input image. A new level-of-detail control algorithm using a multi-resolution pyramid is also developed to locally adapt the rendering parameters, such as the density, orientation, and width of pencil strokes, to the degree of attention defined by the saliency map. Experimental results show that images generated with the proposed method present a visual effect similar to that of real pencil drawings and can successfully direct the viewer's attention toward the focus.
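As an illustration of mapping saliency to local rendering parameters, the sketch below assigns each pixel a pyramid level from its saliency and derives a stroke density and width from that level. The level count and the halving/doubling rules are assumptions for illustration, not values from the paper:

```python
import numpy as np

def stroke_params_from_saliency(saliency, levels=4):
    """Map per-pixel saliency in [0, 1] to pencil-stroke parameters.

    High saliency -> level 0 (finest: dense, thin strokes);
    low saliency  -> level levels-1 (coarsest: sparse, wide strokes).
    """
    level = (levels - 1) - np.clip((saliency * levels).astype(int), 0, levels - 1)
    density = 1.0 / (2.0 ** level)   # stroke density halves per coarser level
    width = 1.0 * (2.0 ** level)     # stroke width doubles per coarser level
    return level, density, width
```

A renderer would read these three maps when placing strokes, so detail concentrates where the saliency map predicts attention.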

17.
A panoramic representation method for dynamic scenes   Cited by: 3 (0 self-citations, 3 by others)
杜威  李华 《计算机学报》2002,25(9):968-975
To address the inability of panoramas to represent dynamic scenes, a panoramic representation method for dynamic scenes is proposed that combines video textures with panoramas to construct dynamic panoramas. The system first stitches a series of images taken from a fixed viewpoint into a panorama; it then films the periodically or randomly moving objects in the scene with a camera and extracts video textures; finally, the video textures are registered and blended with the panorama to generate a dynamic panorama. A dynamic panorama retains the full-view navigation of a static panorama while giving the scene dynamic characteristics, greatly enhancing the realism of walkthroughs.

18.
Interactive rendering of soft shadows (or penumbra) in scenes with moving objects is a challenging problem. High-quality walkthrough rendering of static scenes with penumbra can be achieved using pre-calculated discontinuity meshes, which provide a triangulation well adapted to penumbral boundaries, and backprojections, which provide exact illumination computation at vertices very efficiently. However, recomputing the complete mesh and backprojection structures at each frame is prohibitively expensive in environments with changing geometry. This recomputation would in any case be wasteful: only a limited part of these structures actually needs to be recalculated. We present a novel algorithm which uses spatial coherence of movement as well as the rich visibility information existing in the discontinuity mesh to avoid unnecessary recomputation after object motion. In particular, we isolate all modifications required for the update of the discontinuity mesh by using an augmented spatial subdivision structure, and we restrict intersections of discontinuity surfaces with the scene. In addition, we develop an algorithm which identifies visibility changes by exploiting information contained in the planar discontinuity mesh of each scene polygon, obviating the need for many expensive searches in 3D space. A full implementation of the algorithm is presented, which allows interactive updates of high-quality soft shadows for scenes of moderate complexity. The algorithm can also be directly applied to global illumination.

19.
An adaptive moving-object detection method based on inter-frame differencing   Cited by: 6 (1 self-citation, 5 by others)
This paper proposes an adaptive moving-object detection algorithm based on inter-frame differencing. The algorithm extracts the background image of a continuous video by taking, at each pixel, the gray level with the highest frequency in its histogram; motion regions are obtained from adjacent frames by frame differencing; the moving object is then extracted by differencing the motion-region image against the background image. Experimental results show that the algorithm extracts the background well in video sequences with multiple sources of uncertainty, responds promptly to changes in the actual scene, and improves the quality of moving-object detection.
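The histogram-peak background estimate and the two differencing steps described above can be sketched directly; the difference threshold of 25 is an illustrative assumption:

```python
import numpy as np

def mode_background(frames):
    """Estimate the background as, at each pixel, the gray level occurring
    most often across the frame stack (NxHxW, uint8) -- the histogram-peak
    rule described in the abstract.
    """
    n, h, w = frames.shape
    flat = frames.reshape(n, -1)
    bg = np.empty(h * w, dtype=np.uint8)
    for i in range(h * w):
        bg[i] = np.bincount(flat[:, i], minlength=256).argmax()
    return bg.reshape(h, w)

def detect_moving(frame, prev_frame, background, thresh=25):
    """Motion regions from adjacent-frame differencing, intersected with the
    region-vs-background difference to extract the moving object."""
    motion = np.abs(frame.astype(int) - prev_frame.astype(int)) > thresh
    target = np.abs(frame.astype(int) - background.astype(int)) > thresh
    return motion & target
```

The per-pixel Python loop in `mode_background` is the readable version; a production implementation would vectorize the histogram accumulation.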

20.
A brush-based non-photorealistic rendering algorithm   Cited by: 1 (0 self-citations, 1 by others)
桂斌  何广明 《计算机应用与软件》2009,26(10):225-227,258
Non-photorealistic rendering (NPR) is a branch of computer graphics, and brush-based NPR is an important part of it. This paper presents a brush-based multi-pass non-photorealistic rendering algorithm. The algorithm takes the normal direction of the color gradient of the source image's grayscale version as the brush direction, and uses distance in color space to control how the brush paints. Experimental results show that, for a given input image, the algorithm can effectively generate images with an oil-painting style.
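The two rules the abstract names, brush direction as the normal of the grayscale gradient (so strokes follow isophotes) and color-space distance as the painting control, can be sketched as follows (function names are illustrative):

```python
import numpy as np

def brush_directions(gray):
    """Per-pixel brush direction, in radians, as the normal to the grayscale
    gradient, i.e. along the local isophote (edge) direction.

    Gradients are computed with central differences via np.gradient.
    """
    gy, gx = np.gradient(gray.astype(float))
    # Rotating the gradient (gx, gy) by 90 degrees gives its normal.
    return np.arctan2(gx, -gy)

def color_distance(c1, c2):
    """Euclidean distance in RGB space, used to decide whether the brush
    keeps painting (small distance) or stops (large distance)."""
    return float(np.linalg.norm(np.asarray(c1, float) - np.asarray(c2, float)))
```

On a horizontal intensity ramp the gradient points along x, so the returned direction is vertical, which is the expected stroke orientation along the constant-intensity lines.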


Copyright©北京勤云科技发展有限公司  京ICP备09084417号