Similar Literature
Found 20 similar documents (search time: 265 ms)
1.
Video remains the method of choice for capturing temporal events. However, without access to the underlying 3D scene models, it remains difficult to make object-level edits in a single video or across multiple videos. While it may be possible to explicitly reconstruct the 3D geometries to facilitate these edits, such a workflow is cumbersome, expensive, and tedious. In this work, we present a much simpler workflow to create plausible editing and mixing of raw video footage using only sparse structure points (SSP) directly recovered from the raw sequences. First, we use user scribbles to structure the point representations obtained via structure-from-motion on the input videos. The resultant structure points, even when noisy and sparse, are then used to enable various video edits in 3D, including view perturbation, keyframe animation, object duplication and transfer across videos, etc. Specifically, we describe how to synthesize object images from new views using a novel image-based rendering technique with the SSPs as a proxy for the missing 3D scene information. We propose a structure-preserving image warping on multiple input frames adaptively selected from the object video, followed by a spatio-temporally coherent image stitching to compose the final object image. Simple planar shadows and depth maps are synthesized for objects to generate plausible video sequences mimicking real-world interactions. We demonstrate our system on a variety of input videos to produce complex edits that are otherwise difficult to achieve.

2.
We present a novel computational framework for physically and chemically based simulations of analog alternative photographic processes. In the real world, these processes allow the creation of very personal and unique depictions due to the combination of the chemicals used, the physical interaction with liquid solutions, and the individual craftsmanship of the artist. Our work focuses not only on achieving similarly compelling results, but on the manual process as well, introducing a novel exploratory approach for interactive digital image creation and manipulation. With such an emphasis on user interaction, our simulations are devised to run on tablet devices; thus we propose combining a lightweight data-driven model that simulates the chemical reactions involved with efficient fluid simulations that modulate them. This combination allows realistic gesture-based user interaction with constant visual feedback in real time. Using the proposed framework, we have built two prototypes with different tradeoffs between realism and flexibility, showing its potential for building novel image editing tools.

3.
In this paper we present a hybrid approach to reconstruct hair dynamics from multi-view video sequences, captured under uncontrolled lighting conditions. The key to this method is a refinement approach that combines image-based reconstruction techniques with physically based hair simulation. Given an initially reconstructed sequence of hair fiber models, we develop a hair dynamics refinement system using particle-based simulation and incompressible fluid simulation. The system allows us to improve reconstructed hair fiber motions and complete missing fibers caused by occlusion or tracking failure. The refined space-time hair dynamics are consistent with the video inputs and can also be used to generate novel hair animations of different hair styles. We validate this method through various real hair examples.

4.
Compared to still image editing, content-based video editing faces the additional challenge of maintaining spatiotemporal consistency with respect to geometry. This makes it difficult to seamlessly modify video content, for instance when inserting or removing an object. In this paper, we present a new video editing system for creating spatiotemporally consistent and visually appealing refilming effects. Unlike typical filming practice, our system requires no labor-intensive construction of 3D models/surfaces mimicking the real scene. Instead, it is based on an unsupervised inference of view-dependent depth maps for all video frames. We provide interactive tools requiring only a small amount of user input to perform elementary video content editing, such as separating video layers, completing the background scene, and extracting moving objects. These tools can be used to produce a variety of visual effects in our system, including but not limited to video composition, the "predator" effect, bullet-time, depth-of-field, and fog synthesis. Some of the effects can be achieved in real time.

5.
We present a method for adding artistic control to physics‐based hair simulation. Taking as input an animation of a coarse set of guide hairs, we constrain a subsequent higher‐resolution simulation of detail hairs to follow the input motion in a spatially‐averaged sense. The resulting high‐resolution motion adheres to the artistic intent, but is enhanced with detailed deformations and dynamics generated by physics‐based simulation. The technical core of our approach is formed by a set of tracking constraints, requiring the center of mass of a given subset of detail hair to maintain its position relative to a reference point on the corresponding guide hair. As a crucial element of our formulation, we introduce the concept of dynamically‐changing constraint targets that allow reference points to slide along the guide hairs to provide sufficient flexibility for natural deformations. We furthermore propose to regularize the null space of the tracking constraints based on variance minimization, effectively controlling the amount of spread in the hair. We demonstrate the ability of our tracking solver to generate directable yet natural hair motion on a set of targeted experiments and show its application to production‐level animations.
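The center-of-mass tracking constraint described in this abstract can be sketched minimally: the CoM of a detail-hair subset must stay at a fixed offset from a reference point on its guide hair. The sketch below is illustrative only (a uniform-shift projection, not the paper's constrained solver); all function and parameter names are hypothetical.

```python
import numpy as np

def com_tracking_residual(detail_pos, detail_mass, guide_ref, offset):
    """Residual of a center-of-mass tracking constraint.

    detail_pos : (n, 3) positions of one subset of detail-hair vertices
    detail_mass: (n,)   per-vertex masses
    guide_ref  : (3,)   reference point on the corresponding guide hair
    offset     : (3,)   rest offset of the subset's CoM from the reference
    """
    total = detail_mass.sum()
    com = (detail_mass[:, None] * detail_pos).sum(axis=0) / total
    return com - (guide_ref + offset)

def project_com(detail_pos, detail_mass, guide_ref, offset):
    """Shift every vertex by the same correction so the CoM lands on target."""
    r = com_tracking_residual(detail_pos, detail_mass, guide_ref, offset)
    return detail_pos - r  # uniform shift; a real solver would weight by mass
```

After the projection the residual is zero, which is the invariant the paper's tracking constraints enforce (there, with sliding reference points rather than fixed ones).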

6.
Animators frequently choreograph complex motions for multiple objects that interact through collision and obstruction. In such situations, the use of physically based dynamics to confer visual realism creates challenging computational problems. Forward simulation is typically well understood, but the inverse problem of motion synthesis—that of synthesizing motions consistent both with physical law and with the animator's requirements—is generally tedious and sometimes intractable. We show how N-body inverse problems can be formulated as optimization tasks. We present a simply stated, but combinatorially formidable example that exhibits all of the essential sources of complexity common to N-body motion synthesis, and show how it can be solved approximately using heuristic methods based on evolutionary computation.
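The idea of casting an inverse motion-synthesis problem as an optimization task solved by evolutionary heuristics can be illustrated on a toy example: searching for throw parameters so that a simple ballistic simulation lands at a target range. This is a hedged sketch, not the paper's combinatorial N-body example; all names and parameters are hypothetical.

```python
import math
import random

def simulate(v0, angle, g=9.81):
    """Forward-simulate a ballistic throw; return its horizontal range."""
    return v0 * v0 * math.sin(2.0 * angle) / g

def evolve(target_range, generations=200, pop=20, seed=1):
    """(1+lambda)-style evolutionary search over (speed, angle):
    mutate the best candidate, keep any mutant that reduces the error."""
    rng = random.Random(seed)
    best = (10.0, 0.5)
    best_err = abs(simulate(*best) - target_range)
    for _ in range(generations):
        for _ in range(pop):
            cand = (max(0.1, best[0] + rng.gauss(0.0, 1.0)),
                    min(1.5, max(0.05, best[1] + rng.gauss(0.0, 0.1))))
            err = abs(simulate(*cand) - target_range)
            if err < best_err:
                best, best_err = cand, err
    return best, best_err
```

The forward simulator is treated as a black box, exactly the property that makes evolutionary heuristics attractive when gradients of the physics are unavailable.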

7.
8.
Stitching motions from multiple videos into a single video scene is a challenging task in current video fusion, mosaicing, and film production research. In this paper, we present a novel method for video motion stitching based on the similarities of trajectory and position of foreground objects. First, multiple video sequences are registered in a common reference frame, from which we estimate the static and dynamic backgrounds; the former is used to distinguish the foreground from the background and the static region from the dynamic region, while the latter is used to mosaic the warped input video sequences into a panoramic video. Motion similarity is then calculated from trajectory and position similarity, and the corresponding motion parts are extracted from the multiple video sequences. Finally, using the corresponding motion parts, the foregrounds of the different videos and the dynamic backgrounds are fused into a single video scene through Poisson editing, with the motions involved being stitched together. Our major contributions are a framework for multiple-video mosaicing based on motion similarity and a method for calculating motion similarity from trajectory similarity and position similarity. Experiments on everyday videos show that the agreement of the trajectory and position similarities with the real motion similarity plays a decisive role in determining whether two motions can be stitched. We obtain satisfactory results for motion stitching and video mosaicing.

9.
The appearance of weathering effects on stone is important for creating outdoor scenes in computer graphics. To reproduce them, previous research has built upon physical simulation, which, while yielding a degree of realism, is computationally expensive and inapplicable when the object geometry is unknown. Physical simulation also requires specific knowledge of the stone properties and environmental processes. In this paper, we present a simple visual simulation pipeline for creating weathering effects on stone within a single image. Two primary effects of stone weathering, smoothing and roughening, are considered. In addition, erosion of the object silhouette is treated. These challenging effects involve significant geometry changes, which are intractable for previous image‐based editing techniques. The effectiveness of our technique is illustrated on a variety of scenes and types of stone. While it can be fully automatic, it also allows easy user interaction.

10.
A Dynamic Motion Control Technique for Human-like Articulated Figures
This paper presents a dynamic motion control technique for human-like articulated figures in a physically based character animation system. The method controls a figure such that it tracks input motion specified by a user. When environmental physical input such as an external force or a collision impulse is applied to the figure, the method generates dynamically changing motion in response. We introduce comfort and balance control to compute the angular acceleration of the figure's joints. Our algorithm controls the several parts of a human-like articulated figure separately, through the minimum number of degrees of freedom, and thus simulates realistic human motions at low computational cost. Unlike existing dynamic simulation systems, our method assumes that the input motion is already realistic, and aims to dynamically change it in real time only when unexpected physical input is applied to the figure. As such, our method works efficiently within the framework of current computer games.

11.
One of the best ways to synthesize realistic human motions is to animate characters from captured motion data, which inherently respects the laws of motion. Retargeting and interpolation methods are often used to adapt these motions to different representations of the character and to various environmental constraints, but they may introduce physical inaccuracies even though the synthesized motions look natural. This paper presents a method for evaluating the physical correctness of retargeted and interpolated locomotions using an inverse dynamics analysis. Furthermore, we propose to improve an initial database with analysed motions that are synthesized again using a forward dynamics approach. The analysis algorithm consists of determining the resulting forces and torques at the joints. To this end, we develop a process that automatically creates the mass/inertia model of the character. Then, using support-phase recognition, we compute the resulting forces and torques with an inverse dynamics method. The retargeting and interpolation methods change the physics of the motions. We evaluate this change by applying our analysis to artificial and real motions and by comparing against literature results and experimental data from force plates. The evaluation examines several retargeting and interpolation parameters, such as the global size of the character or the structure of the model. The output of this evaluation, the resulting forces and torques at the joints, is used to produce physically valid motions through forward dynamics simulation. For this purpose, we introduce force and torque normalizations, and the synthesized motions may finally improve the initial database.
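The core of such an inverse dynamics analysis, recovering the resulting torque at a joint, reduces in the simplest case to the Newton-Euler equation for a single link. A hedged one-link sketch follows (a uniform rod pivoted at the joint; the mass and length are made-up placeholders, not the paper's character model):

```python
import math

def joint_torque(theta, theta_ddot, m=5.0, L=1.0, g=9.81):
    """Actuator torque at the pivot of a uniform rod (a one-link limb),
    from the Newton-Euler equation
        tau = I * theta_ddot + m * g * (L / 2) * cos(theta),
    where theta is measured from the horizontal and
    I = m * L^2 / 3 is the rod's inertia about the pivot."""
    I = m * L * L / 3.0
    return I * theta_ddot + m * g * (L / 2.0) * math.cos(theta)
```

Holding the rod vertical costs no torque, while holding it horizontal costs the full gravity moment m·g·L/2; a whole-body analysis chains such equations over every joint of the mass/inertia model.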

12.
We propose a motion planning formulation of the overarm throw for a 55-degree-of-freedom biped human multibody system. The unique characteristics of the throwing task—highly redundant, highly nonlinear, and highly dynamic—have made throwing-motion simulation a recognized challenge in the literature; we address them within the framework of multibody dynamics and optimization. To generate physically feasible throwing motions in a fully predictive manner, without input reference from motion capture or animation, rigorous dynamics modeling, such as dynamic balance based on the Zero-Moment Point (ZMP) and ground reaction loads, is incorporated into the constraints. Given the target location and the object mass, the algorithm outputs the motion, the required actuator torques, the release conditions, and the projectile motion and flight time of the object. Realistic human-like throwing motions are generated for different input parameters, demonstrating valid cause–effect relations in terms of both kinematic and kinetic outputs.
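The Zero-Moment Point constraint mentioned above has a standard closed form for point masses on flat ground, which can be sketched directly (a textbook formula, not the paper's full 55-DOF multibody model; variable names are illustrative):

```python
def zero_moment_point(masses, pos, acc, g=9.81):
    """x-coordinate of the ZMP on flat ground (z = 0) for point masses.

    masses: [m_i], pos: [(x_i, z_i)], acc: [(xdd_i, zdd_i)]

        x_zmp = sum m_i * (x_i * (zdd_i + g) - z_i * xdd_i)
                / sum m_i * (zdd_i + g)

    Dynamic balance requires x_zmp to stay inside the support polygon.
    """
    num = sum(m * (x * (zdd + g) - z * xdd)
              for m, (x, z), (xdd, zdd) in zip(masses, pos, acc))
    den = sum(m * (zdd + g) for m, (_, zdd) in zip(masses, acc))
    return num / den
```

In the static case all accelerations vanish and the ZMP coincides with the ground projection of the center of mass, a useful sanity check for any implementation.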

13.
While various commercial-strength editing tools are available today for still images, object-based manipulation of real-world video footage remains a challenging problem. In this system paper, we present a framework for interactive video editing. Our focus is on footage from a single, conventional video camera. By relying on spatio-temporal editing techniques operating on the video cube, we do not need to recover 3D scene geometry. Our framework is capable of removing and inserting objects, editing object motion, applying non-rigid object deformations, keyframe interpolation, and emulating camera motion. We demonstrate how movie shots of moderate complexity can be convincingly modified during post-processing.

14.
Many data‐driven animation techniques are capable of producing high quality motions of human characters. Few techniques, however, are capable of generating motions that are consistent with physically simulated environments. Physically simulated characters, in contrast, are automatically consistent with the environment, but their motions are often unnatural because they are difficult to control. We present a model‐predictive controller that yields natural motions by guiding simulated humans toward real motion data. During simulation, the predictive component of the controller solves a quadratic program to compute the forces for a short window of time into the future. These forces are then applied by a low‐gain proportional‐derivative component, which makes minor adjustments until the next planning cycle. The controller is fast enough for interactive systems such as games and training simulations. It requires no precomputation and little manual tuning. The controller is resilient to mismatches between the character dynamics and the input motion, which allows it to track motion capture data even where the real dynamics are not known precisely. The same principled formulation can generate natural walks, runs, and jumps in a number of different physically simulated surroundings.
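The low-gain proportional-derivative component of such a controller can be sketched for a 1D point mass tracking a reference trajectory; the predictive QP that plans forces over a short future window is omitted, so this is only an illustrative fragment with made-up gains, not the paper's controller.

```python
def track_reference(ref, dt=0.01, kp=40.0, kd=8.0, m=1.0):
    """PD tracking of a reference trajectory for a 1D point mass,
    integrated with semi-implicit Euler. In the full controller these
    low gains would only correct a QP-planned force; here the PD term
    alone illustrates the tracking behaviour."""
    x, v = ref[0], 0.0
    out = []
    for r in ref:
        f = kp * (r - x) - kd * v   # corrective force, recomputed each step
        v += f / m * dt
        x += v * dt
        out.append(x)
    return out
```

With a step change in the reference, the mass settles on the new target after a short transient; in the paper's formulation the predictive plan absorbs most of this transient so the PD gains can stay low.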

15.
Progressive light transport simulations aspire to physically based, consistent rendering in order to obtain visually appealing illumination effects, depth and realism. Handling large scenes is difficult, however, because in typical scene-subdivision approaches the parallel processing requires frequent synchronization as light bounces throughout the scene. In practice, only a few object parts contribute noticeably to the radiance observable in the image, whereas large areas play only a minor role; a mesh simplification of the latter can go unnoticed by the human eye. This particular importance to the visible radiance in the image calls for an output-sensitive mesh reduction that allows originally out-of-core scenes to be rendered on a single machine without memory swapping. Thus, in this paper, we present a preprocessing step that reduces the scene size under the constraint of radiance preservation, with a focus on high-frequency effects such as caustics. To this end, we perform a small number of preliminary light-transport simulation iterations, identifying the mesh parts that contribute significantly to the visible radiance in the scene and that we therefore preserve during mesh reduction.

16.
Aging and imperfections provide important visual cues for realism. We present a novel physically‐based approach for simulating the biological aging and decay process in fruits. This method simulates interactions between multiple processes. Our biologically‐derived, reaction‐diffusion model generates growth patterns for areas of fungal and bacterial infection. Fungal colony spread and propagation is affected by both bacterial growth and nutrient depletion. This process changes the physical properties of the surface of the fruit as well as its internal volume substrate. The fruit is physically simulated with parameters such as skin thickness and porosity, water content, flesh rigidity, ambient temperature, humidity, and proximity to other surfaces. Our model produces a simulation that closely mirrors the progression of decay in real fruits under similar parameterized conditions. Additionally, we provide a tool that allows artists to customize the input of the program to produce generalized fruit simulations.
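Reaction-diffusion growth patterning of the kind mentioned above is commonly illustrated with the Gray-Scott system; the sketch below is that generic system, not the paper's biologically derived model, and the parameter values are conventional defaults, not values from the paper.

```python
import numpy as np

def gray_scott_step(U, V, Du=0.16, Dv=0.08, f=0.035, k=0.065, dt=1.0):
    """One explicit step of the Gray-Scott reaction-diffusion system.
    U is the substrate (nutrient), V the spreading species (colony);
    V consumes U autocatalytically (the U*V*V term) while both diffuse."""
    def lap(A):  # 5-point Laplacian with wrap-around boundaries
        return (np.roll(A, 1, 0) + np.roll(A, -1, 0) +
                np.roll(A, 1, 1) + np.roll(A, -1, 1) - 4.0 * A)
    uvv = U * V * V
    U = U + dt * (Du * lap(U) - uvv + f * (1.0 - U))
    V = V + dt * (Dv * lap(V) + uvv - (f + k) * V)
    return U, V
```

Seeding a small patch of V into a field of U = 1 and iterating grows irregular spreading patterns; a decay simulation would couple such a field to the surface material properties.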

17.
Creating long motion sequences is a time‐consuming task even when motion capture equipment or motion editing tools are used. In this paper, we propose a system for creating a long motion sequence by combining elementary motion clips. The user is asked to first input motions on a timeline. The system then automatically generates a continuous and natural motion. Our system employs four motion synthesis methods: motion transition, motion connection, motion adaptation, and motion composition. Based on the constraints between the feet of the animated character and the ground, and the timing of the input motions, the appropriate method is determined for each pair of overlapped or sequential motions. As the user changes the arrangement of the motion clips, the system interactively changes the output motion. Alternatively, the user can make the system execute an input motion as soon as possible so that it follows the previous motion smoothly. Using our system, users can make use of existing motion clips. Because the entire process is automatic, even novices can easily use our system. A prototype system demonstrates the effectiveness of our approach.
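Of the four synthesis methods listed, motion transition is the easiest to sketch: cross-fading an overlap window between two clips. The snippet below is a simplified illustration (linear blending of flat joint-angle vectors; a real system would blend rotations with quaternion slerp, align root trajectories, and respect the foot-contact constraints the abstract mentions), not the described system.

```python
import math

def blend_transition(clip_a, clip_b, overlap):
    """Cross-fade the last `overlap` frames of clip_a into the first
    `overlap` frames of clip_b with an ease-in/ease-out cosine weight.
    Each frame is a flat list of joint angles."""
    out = clip_a[:-overlap]
    for i in range(overlap):
        t = (1.0 - math.cos(math.pi * (i + 1) / (overlap + 1))) / 2.0
        fa = clip_a[len(clip_a) - overlap + i]
        fb = clip_b[i]
        out.append([(1.0 - t) * a + t * b for a, b in zip(fa, fb)])
    out.extend(clip_b[overlap:])
    return out
```

The result has len(clip_a) + len(clip_b) - overlap frames, with the blended window rising monotonically from the first clip's pose toward the second's.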

18.
In recent years, sophisticated automatic segmentation algorithms have been developed for various medical image segmentation problems. However, there are always cases where automatic algorithms fail to produce an acceptable segmentation. In these cases, the user needs efficient segmentation editing tools, a problem that has received little attention in research. We give a comprehensive overview of segmentation editing for three‐dimensional (3D) medical images. For segmentation editing in two‐dimensional (2D) images, we discuss a sketch‐based approach in which the user modifies the segmentation in the contour domain. Based on this 2D interface, we present an image‐based as well as an image‐independent method for intuitive and efficient segmentation editing in 3D, in the context of tumour segmentation in computed tomography (CT). Our editing tools have been evaluated on a database containing 1226 representative liver metastases, lung nodules and lymph nodes of different shape, size and image quality. In addition, we have performed a qualitative evaluation with radiologists and technical experts, demonstrating the efficiency of our tools.

19.
In this paper, we present an image editing tool that allows the user to deform images using a sketch‐based interface. The user simply sketches a set of source curves in the input image, along with target curves that the source curves should be deformed to. The moving least squares (MLS) deformation technique [SMW06] is then adapted to produce realistic deformations while satisfying the curves' positional constraints. We also propose a scheme to reduce image fold‐overs in MLS deformations. Our system has a very intuitive user interface, generates physically plausible deformations, and can be easily implemented on the GPU for real‐time performance.
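The affine variant of the moving least squares deformation of Schaefer et al. [SMW06], which this tool adapts, can be sketched compactly for a single query point (parameter names follow the common formulation; this is an illustration, not the paper's GPU implementation):

```python
import numpy as np

def mls_affine(v, p, q, alpha=1.0, eps=1e-8):
    """Affine moving-least-squares deformation of a 2D point v,
    given control points p (n, 2) mapped to targets q (n, 2).
    Weights fall off with distance from v, so nearby controls dominate."""
    w = 1.0 / (np.sum((p - v) ** 2, axis=1) ** alpha + eps)
    p_star = (w[:, None] * p).sum(0) / w.sum()   # weighted centroids
    q_star = (w[:, None] * q).sum(0) / w.sum()
    ph, qh = p - p_star, q - q_star
    # Best affine map M solving  (sum w ph^T ph) M = (sum w ph^T qh)
    M = np.linalg.solve((w[:, None] * ph).T @ ph,
                        (w[:, None] * ph).T @ qh)
    return (v - p_star) @ M + q_star
```

Two sanity checks: identical control and target points give the identity map, and a pure translation of the targets translates every query point by the same vector. Evaluating this per pixel (or per grid vertex) yields the dense image warp.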

20.
We propose an image editing system for repositioning objects in a single image based on the perspective of the scene. In our system, an input image is transformed into a layer structure composed of object layers and a background layer, and the scene depth is then computed from the ground region, which the user specifies with a simple boundary line. The object size and the order of overlapping are determined automatically during repositioning, based on the scene depth. In addition, our system enables the user to move shadows along with objects naturally by extracting the shadow mattes using only a few user‐specified scribbles. Finally, we demonstrate the versatility of our system through applications to depth‐of‐field effects, fog synthesis and 3D walkthrough in an image.
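The automatic size adjustment during repositioning can be illustrated with a single-image ground-plane model: under a pinhole camera, the apparent height of an object standing on the ground is proportional to the image-space distance of its footpoint from the horizon line. A hedged sketch (a standard perspective relation, not the paper's depth computation; names are hypothetical):

```python
def reposition_scale(h_img, y_old, y_new, horizon_y):
    """Image-space height of a ground-standing object after moving its
    footpoint from row y_old to row y_new (pixel rows grow downward).
    Apparent size scales with the footpoint's distance from the horizon."""
    if y_old <= horizon_y or y_new <= horizon_y:
        raise ValueError("footpoint must lie below the horizon")
    return h_img * (y_new - horizon_y) / (y_old - horizon_y)
```

Moving the footpoint halfway toward the horizon halves the object's drawn height, which matches the intuition that objects farther along the ground plane appear smaller.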


Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号