Similar Literature
20 similar documents found (search time: 15 ms)
1.
In this paper, we present an image editing tool that allows the user to deform images using a sketch-based interface. The user simply sketches a set of source curves in the input image, along with target curves to which the source curves should be deformed. The moving least squares (MLS) deformation technique [SMW06] is then adapted to produce realistic deformations while satisfying the curves' positional constraints. We also propose a scheme to reduce image fold-overs in MLS deformations. Our system has a very intuitive user interface, generates physically plausible deformations, and can be easily implemented on the GPU for real-time performance.
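As an illustration of the underlying machinery, here is a minimal sketch of point-based affine MLS deformation in the spirit of [SMW06]. The paper constrains curves; sampling those curves into point constraints is assumed here, and the function and parameter names are illustrative.

```python
# Minimal sketch of affine moving-least-squares deformation (point constraints),
# assuming the sketched source/target curves have been sampled into points.
import numpy as np

def mls_affine_deform(v, p, q, alpha=1.0, eps=1e-8):
    """Deform a single 2D point v given control points p -> q (both shaped (n, 2))."""
    d2 = np.sum((p - v) ** 2, axis=1) + eps            # squared distances to controls
    w = 1.0 / d2 ** alpha                               # MLS weights
    p_star = (w[:, None] * p).sum(0) / w.sum()          # weighted centroids
    q_star = (w[:, None] * q).sum(0) / w.sum()
    p_hat, q_hat = p - p_star, q - q_star               # centred controls
    A = (w[:, None, None] * p_hat[:, :, None] * p_hat[:, None, :]).sum(0)  # 2x2 moment matrix
    B = (w[:, None, None] * p_hat[:, :, None] * q_hat[:, None, :]).sum(0)
    M = np.linalg.solve(A, B)                           # affine map minimising the weighted error
    return (v - p_star) @ M + q_star

if __name__ == "__main__":
    src = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
    dst = np.array([[0.0, 0.0], [1.2, 0.1], [0.0, 1.0]])
    print(mls_affine_deform(np.array([0.5, 0.5]), src, dst))
```

In practice the same computation is applied independently to every pixel (or grid vertex) of the image, which is why a GPU implementation is straightforward.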

2.
Repeated scene elements are copious and ubiquitous in natural images. Cutting out such repeated elements with previous image segmentation methods usually involves tedious and laborious user interaction. In this paper, we present RepSnapping, a novel method oriented to the cutout of repeated scene elements with much less user interaction. By exploiting the inherent similarity between repeated elements, a new optimization model is introduced that threads correlated elements through the segmentation procedure. The proposed model enables an efficient solution using max-flow/min-cut on an extended graph. Experiments indicate that RepSnapping facilitates cutout of repeated elements better than state-of-the-art interactive image segmentation and repetition detection methods.
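For readers unfamiliar with the optimisation machinery, the following is a minimal sketch of binary segmentation via max-flow/min-cut on a pixel graph. The extra edges RepSnapping adds to correlate repeated elements are not modelled here, and the seed handling and edge weights are illustrative assumptions.

```python
# Minimal sketch of binary graph-cut segmentation on a 4-connected pixel grid;
# scribble seeds become hard terminal links, neighbour similarity sets pairwise capacities.
import numpy as np
import networkx as nx

def graph_cut_segment(img, fg_seeds, bg_seeds, sigma=0.1, lam=1.0):
    h, w = img.shape
    G = nx.DiGraph()
    S, T = "source", "sink"
    def node(y, x): return y * w + x
    for y in range(h):
        for x in range(w):
            n = node(y, x)
            if (y, x) in fg_seeds:                      # hard foreground constraint
                G.add_edge(S, n, capacity=1e9)
            if (y, x) in bg_seeds:                      # hard background constraint
                G.add_edge(n, T, capacity=1e9)
            for dy, dx in ((0, 1), (1, 0)):             # pairwise links to right/bottom neighbours
                yy, xx = y + dy, x + dx
                if yy < h and xx < w:
                    cap = lam * np.exp(-(img[y, x] - img[yy, xx]) ** 2 / (2 * sigma ** 2))
                    G.add_edge(n, node(yy, xx), capacity=cap)
                    G.add_edge(node(yy, xx), n, capacity=cap)
    _, (fg, _) = nx.minimum_cut(G, S, T)                # min-cut partition reachable from the source
    mask = np.zeros((h, w), bool)
    for n in fg:
        if n not in (S, T):
            mask[n // w, n % w] = True
    return mask

if __name__ == "__main__":
    img = np.zeros((6, 6)); img[:, 3:] = 1.0            # two flat regions
    print(graph_cut_segment(img, {(2, 1)}, {(2, 4)}).astype(int))
```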

3.
Image vectorization is an important yet challenging problem, especially when the input image has rich content. In this paper, we develop a novel method for automatically vectorizing natural images with feature-aligned quad-dominant meshes. Inspired by quadrangulation methods in 3D geometry processing, we propose a new directional field optimization technique that encodes the color gradients, sidestepping the explicit computation of salient image features. We further compute the anisotropic scales of the directional field by accounting for the distance between image features. Our method is fully automatic and efficient, taking only a few seconds for a 400×400 image on an ordinary laptop. We demonstrate the effectiveness of the proposed method on various image editing applications.

4.
Creating long motion sequences is a time‐consuming task even when motion capture equipment or motion editing tools are used. In this paper, we propose a system for creating a long motion sequence by combining elementary motion clips. The user is asked to first input motions on a timeline. The system then automatically generates a continuous and natural motion. Our system employs four motion synthesis methods: motion transition, motion connection, motion adaptation, and motion composition. Based on the constraints between the feet of the animated character and the ground, and the timing of the input motions, the appropriate method is determined for each pair of overlapped or sequential motions. As the user changes the arrangement of the motion clips, the system interactively changes the output motion. Alternatively, the user can make the system execute an input motion as soon as possible so that it follows the previous motion smoothly. Using our system, users can make use of existing motion clips. Because the entire process is automatic, even novices can easily use our system. A prototype system demonstrates the effectiveness of our approach.

5.
We present GEMSe, an interactive tool for exploring and analyzing the parameter space of multi-channel segmentation algorithms. Our target users are domain experts who are not necessarily segmentation specialists. GEMSe allows the exploration of the space of possible parameter combinations for a segmentation framework and its ensemble of results. Users start by sampling the parameter space and computing the corresponding segmentations. A hierarchically clustered image tree provides an overview of the variation in the resulting space of label images. Details are provided through exemplary images from the selected cluster and histograms visualizing the parameters and the derived output in the selected cluster. The correlation between parameters and derived output, as well as the effect of parameter changes, can be explored through interactive filtering and scatter plots. We evaluate the usefulness of GEMSe through expert reviews and case studies based on three different kinds of datasets: a synthetic dataset emulating the combination of 3D X-ray computed tomography with data from K-edge spectroscopy, a three-channel scan of a rock crystal acquired by a Talbot-Lau grating interferometer X-ray computed tomography device, and a hyperspectral image.
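A minimal sketch of the first two steps described above (sampling the parameter space and building a hierarchically clustered image tree) is given below. The random sampling scheme, the Dice-based distance between label images and the toy "segmenter" are illustrative assumptions, not GEMSe's actual choices.

```python
# Sample a segmentation framework's parameter space and hierarchically cluster
# the resulting label images by pairwise dissimilarity.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform

def dice_distance(a, b):
    """1 - Dice overlap between two binary label images."""
    inter = np.logical_and(a, b).sum()
    return 1.0 - 2.0 * inter / max(a.sum() + b.sum(), 1)

def explore_parameter_space(segment, param_ranges, n_samples=32, seed=0):
    rng = np.random.default_rng(seed)
    params = [{k: rng.uniform(lo, hi) for k, (lo, hi) in param_ranges.items()}
              for _ in range(n_samples)]
    results = [segment(**p) for p in params]             # one label image per parameter sample
    n = len(results)
    D = np.zeros((n, n))
    for i in range(n):
        for j in range(i + 1, n):
            D[i, j] = D[j, i] = dice_distance(results[i], results[j])
    tree = linkage(squareform(D), method="average")      # hierarchically clustered image tree
    return params, results, tree

if __name__ == "__main__":
    img = np.linspace(0, 1, 100).reshape(10, 10)
    toy_segment = lambda threshold: img > threshold       # stand-in for a real segmentation framework
    params, results, tree = explore_parameter_space(toy_segment, {"threshold": (0.2, 0.8)})
    print(fcluster(tree, t=4, criterion="maxclust"))      # cluster id of each sampled result
```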

6.
We propose an image editing system for repositioning objects in a single image based on the perspective of the scene. In our system, an input image is decomposed into a layer structure composed of object layers and a background layer, and the scene depth is then computed from the ground region, which the user specifies with a simple boundary line. The object size and the order of overlapping are determined automatically during repositioning based on the scene depth. In addition, our system enables the user to move shadows along with objects naturally by extracting the shadow mattes using only a few user-specified scribbles. Finally, we demonstrate the versatility of our system through applications to depth-of-field effects, fog synthesis and 3D walkthrough in an image.
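The perspective rule this relies on can be sketched compactly: for an object standing on a ground plane, its apparent size scales inversely with scene depth, and depth can be read off the ground contact row once the horizon is known. The flat-ground pinhole model and all parameter names below are illustrative assumptions, not the paper's exact formulation.

```python
# Minimal sketch: depth from a ground contact point under a flat-ground pinhole
# model, and the rescaling factor for a repositioned object layer.
def ground_plane_depth(y_contact, y_horizon, camera_height, focal_px):
    """Depth of a ground contact point at image row y_contact (rows grow downwards)."""
    if y_contact <= y_horizon:
        raise ValueError("contact point must lie below the horizon")
    return camera_height * focal_px / (y_contact - y_horizon)

def reposition_scale(y_old, y_new, y_horizon, camera_height=1.6, focal_px=800.0):
    """Factor by which an object layer is rescaled when its contact row moves from y_old to y_new."""
    d_old = ground_plane_depth(y_old, y_horizon, camera_height, focal_px)
    d_new = ground_plane_depth(y_new, y_horizon, camera_height, focal_px)
    return d_old / d_new                                  # apparent size is inversely proportional to depth

if __name__ == "__main__":
    # moving an object's ground contact from row 600 to row 500 (closer to the horizon at row 400)
    print(reposition_scale(600, 500, y_horizon=400))      # -> 0.5: the object shrinks to half size
```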

7.
Segmentation of volumetric data is an important part of many analysis pipelines, but it frequently requires manual inspection and correction. While plenty of volume editing techniques exist, it remains cumbersome and error-prone for the user to find and select appropriate regions for editing. We propose an approach to improve volume editing by detecting potential segmentation defects while considering the underlying structure of the object of interest. Our method is based on a novel histogram dissimilarity measure between individual regions, derived from structural information extracted from the initial segmentation. Based on this information, our interactive system guides the user towards potential defects, provides integrated tools for their inspection, and automatically generates suggestions for their resolution. We demonstrate that our approach can reduce interaction effort and supports the user in a comprehensive investigation toward high-quality segmentations.
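To make the idea of a histogram dissimilarity measure concrete, here is a minimal sketch that flags regions whose intensity histogram deviates strongly from their peers. The chi-squared distance and the outlier threshold are assumed stand-ins; the paper derives its own measure from the segmentation's structural information.

```python
# Minimal sketch: flag potential segmentation defects by histogram dissimilarity
# between individual regions of the same structure.
import numpy as np

def chi2_histogram_distance(a, b, bins=32, value_range=(0.0, 1.0), eps=1e-10):
    """Chi-squared distance between the normalised intensity histograms of two regions."""
    ha, _ = np.histogram(a, bins=bins, range=value_range)
    hb, _ = np.histogram(b, bins=bins, range=value_range)
    ha = ha / max(ha.sum(), 1)
    hb = hb / max(hb.sum(), 1)
    return 0.5 * np.sum((ha - hb) ** 2 / (ha + hb + eps))

def flag_outlier_regions(regions, threshold=0.4):
    """regions: list of 1D arrays of voxel intensities, one per segmented region."""
    n = len(regions)
    D = np.zeros((n, n))
    for i in range(n):
        for j in range(i + 1, n):
            D[i, j] = D[j, i] = chi2_histogram_distance(regions[i], regions[j])
    mean_dist = D.sum(1) / max(n - 1, 1)                  # average dissimilarity to all other regions
    return [i for i in range(n) if mean_dist[i] > threshold]

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    good = [rng.normal(0.5, 0.05, 500).clip(0, 1) for _ in range(4)]
    defect = [rng.uniform(0, 1, 500)]                     # a region with a very different intensity profile
    print(flag_outlier_regions(good + defect))            # expect [4]
```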

8.
The ability to interpolate between images taken at different times and viewpoints directly in image space opens up new possibilities. The goal of our work is to create plausible in-between images in real time without the need for an intermediate 3D reconstruction. This also enables us to interpolate between images recorded with uncalibrated and unsynchronized cameras. In our approach we use a novel discontinuity-preserving image deformation model to robustly estimate dense correspondences based on local homographies. Once correspondences have been computed, we are able to render plausible in-between images in real time while properly handling occlusions. We discuss the relation of our approach to human motion perception and other image interpolation techniques.

9.
We present a framework for interactive sketching that allows users to create three‐dimensional (3D) architectural models quickly and easily from a source drawing. The sketching process has four steps. (1) The user calibrates a viewing camera by specifying the origin and vanishing points of the drawing. (2) The user outlines surface polygons in the drawing. (3) A 3D reconstruction algorithm uses perceptual constraints to determine the closest visual fit for the polygon. (4) The user can then adjust aesthetic controls to produce several stylistic effects in the scene: a smooth transition between day and night rendering, a horizon knockout effect and entourage figures. The major advantage of our approach lies in the combination of perception‐based techniques, which allow us to minimize unnecessary interactions, and a hinging‐angle scheme, which shows significant improvement in numerical stability over previous optimization‐based 3D reconstruction algorithms. We also demonstrate how our reconstruction algorithm can be extended to work with perspective images, a feature unavailable in previous approaches.

10.
We present a novel image-based technique for modeling complex unfoliaged trees. Existing tree modeling tools either require capturing a large number of views for dense 3D reconstruction or rely on user inputs and botanic rules to synthesize natural-looking tree geometry. In this paper, we focus on faithfully recovering real, rather than merely realistic-looking, tree geometry from a sparse set of images. Our solution directly integrates 2D/3D tree topology as shape priors into the modeling process. For each input view, we first estimate a 2D skeleton graph from its matte image and then find a 2D skeleton tree from the graph by imposing tree topology. We develop a simple but effective technique for computing the optimal 3D skeleton tree most consistent with the 2D skeletons. For each edge in the 3D skeleton tree, we further apply volumetric reconstruction to recover its corresponding curved branch. Finally, we use piecewise cylinders to approximate each branch from the volumetric results. We demonstrate our framework on a variety of trees to illustrate the robustness and usefulness of our technique.

11.
Recent 3D sketch tools produce networks of three-space curves that suggest the contours of shapes. The shapes may be non-manifold, closed three-dimensional, open two-dimensional, or mixed. We describe a system that automatically generates intuitively appealing piecewise-smooth surfaces from such a curve network, and an intelligent user interface for modifying the automatically chosen surface patches. Both the automatic and the semi-automatic parts of the system use a linear-algebra representation of the set of surface patches to track the topology. On complicated inputs from ILoveSketch [BBS08], our system allows the user to build the desired surface with just a few mouse clicks.

12.
We present a new intuitive UI, which we call cross-boundary brushes, for interactive mesh decomposition. The user roughly draws one or more strokes across a desired cut, and our system automatically returns a best cut running through all the strokes. Reflecting the different natures of part components (i.e., semantic parts) and patch components (i.e., flatter surface patches) in general models, we design two corresponding brushes: the part-brush and the patch-brush. These two types of brushes share a common user interface, enabling easy switching between them. The part-brush executes a cut along an isoline of a harmonic field driven by the user-specified strokes. We show that the inherent smoothness of the harmonic field, together with a carefully designed isoline selection scheme, leads to segmentation results that are insensitive to noise, pose, tessellation and variation in the user's strokes. Our patch-brush uses a novel facet-based surface metric that alleviates the sensitivity to noise and fine details common in region-growing algorithms. Extensive experimental results demonstrate that our cutting tools can produce user-desired segmentations for a wide variety of models, even with single strokes. We also show that our tools outperform state-of-the-art interactive segmentation tools in terms of ease of use and segmentation quality.
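The harmonic-field machinery behind the part-brush can be sketched as a Laplace problem with Dirichlet constraints. The sketch below uses a uniform graph Laplacian and a fixed isovalue of 0.5, both simplifying assumptions; the paper uses a carefully designed isoline selection scheme on the actual mesh.

```python
# Minimal sketch: solve a harmonic field on a mesh graph with 0/1 constraints
# on either side of the user stroke, then split vertices at an isovalue.
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

def harmonic_field(n_verts, edges, constrained):
    """constrained: dict vertex -> value (e.g. 0 on one side of the stroke, 1 on the other)."""
    rows, cols, vals = [], [], []
    deg = np.zeros(n_verts)
    for i, j in edges:
        rows += [i, j]; cols += [j, i]; vals += [-1.0, -1.0]
        deg[i] += 1; deg[j] += 1
    L = sp.coo_matrix((vals, (rows, cols)), shape=(n_verts, n_verts)).tocsr()
    L = L + sp.diags(deg)                                 # uniform graph Laplacian L = D - A
    free = np.array([v for v in range(n_verts) if v not in constrained])
    fixed = np.array(list(constrained))
    b = -L[free][:, fixed] @ np.array([constrained[v] for v in fixed])
    x = np.zeros(n_verts)
    x[fixed] = [constrained[v] for v in fixed]
    x[free] = spla.spsolve(L[free][:, free].tocsc(), b)   # harmonic interpolation of the constraints
    return x

if __name__ == "__main__":
    # a 1D "strip" of 10 vertices; the cut falls mid-way between the two constraints
    edges = [(i, i + 1) for i in range(9)]
    f = harmonic_field(10, edges, {0: 0.0, 9: 1.0})
    print((f >= 0.5).astype(int))                         # segment label per vertex
```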

13.
Usually visualization is applied to gain insight into data. Yet consuming the data in the form of a visual representation is not always enough. Instead, users need to edit the data, preferably through the same means used to visualize them. In this work, we present a semi-automatic approach to visual editing of graphs. The key idea is to use an interactive EditLens that defines where an edit operation affects an already customized and established graph layout. Locally optimal node positions within the lens and edge routes to connected nodes are calculated according to different criteria. This spares the user much manual work, but still provides sufficient freedom to accommodate application-dependent layout constraints. Our approach utilizes the advantages of multi-touch gestures, and is also compatible with classic mouse and keyboard interaction. Preliminary user tests have been conducted with researchers from bioinformatics who need to manually maintain a slowly but constantly growing molecular network. As the user feedback indicates, our solution significantly improves the editing procedure applied so far.

14.
As collections of 2D/3D images continue to grow, interest in effective ways to visualize and explore the statistical morphological properties of a group of images has surged. Recently, deformation models have emerged as simple methods to capture the variability and statistical properties of a collection of images. Such models have proven to be effective in tasks such as image classification, generation, registration, segmentation, and analysis of modes of variation. A crucial element missing from most statistical models has been an effective way to summarize and visualize the statistical morphological properties of a group of images. This paper evaluates different visualization techniques that can be extended and used to illustrate the information captured by such statistical models. First, four illustration techniques are described as methods to summarize the statistical morphological properties as captured by deformation models. Second, results of a user study conducted to compare the effectiveness of each visualization technique are presented. After comparing the performance of 40 subjects, we found that statistical annotation techniques present significant benefits when analyzing the structural properties of a group of images.

15.
Captured reflectance fields tend to provide a relatively coarse sampling of the incident light directions. As a result, sharp illumination features, such as highlights or shadow boundaries, are poorly reconstructed during relighting; highlights are disconnected, and shadows show banding artefacts. In this paper, we propose a novel interpolation technique for 4D reflectance fields that reconstructs plausible images even for non-observed light directions. Given a sparsely sampled reflectance field, we can effectively synthesize images as they would have been obtained from denser sampling. The processing pipeline consists of three steps: (1) segmentation of regions where intermediate lighting cannot be obtained by blending, (2) appropriate flow algorithms for highlights and shadows, and (3) a final reconstruction technique that uses image-based priors to faithfully correct errors that might be introduced by the segmentation or flow step. The algorithm reliably reproduces scenes that contain specular highlights, interreflections, shadows or caustics.

16.
Segmenting a moving foreground (fg) from its background (bg) is a fundamental step in many Machine Vision and Computer Graphics applications. Nevertheless, hardly any attempts have been made to tackle this problem in dynamic 3D scanned scenes. Scanned dynamic scenes are typically challenging due to noise and large missing parts. Here, we present a novel approach for motion segmentation in dynamic point-cloud scenes designed to cater to the unique properties of such data. Our key idea is to augment fg/bg classification with an active learning framework that refines the segmentation process in an adaptive manner. Our method initially classifies the scene points as either fg or bg in an unsupervised manner, by training discriminative RBF-SVM classifiers on automatically labeled, high-certainty fg/bg points. Next, we adaptively detect unreliable classification regions (i.e., where the fg/bg separation is uncertain), locally add more training examples to better capture the motion in these areas, and re-train the classifiers to fine-tune the segmentation. This not only improves segmentation accuracy, but also allows our method to operate in a coarse-to-fine manner, thereby efficiently processing high-density point clouds. Additionally, we present a unique interactive paradigm for enhancing this learning process by using a manual editing tool: the user explicitly edits the RBF-SVM decision borders in unreliable regions in order to refine and correct the classification. We provide extensive qualitative and quantitative experiments on both real (scanned) and synthetic dynamic scenes.
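A minimal sketch of the classify-then-refine loop described above is given below, using an RBF-SVM as in the paper. The per-point features, the "high-certainty" seeding, the uncertainty band and the small refinement budget are illustrative assumptions, not the paper's actual criteria.

```python
# Minimal sketch: unsupervised seeding + adaptive refinement of an RBF-SVM
# foreground/background classifier on per-point descriptors.
import numpy as np
from sklearn.svm import SVC

def segment_motion(features, seed_idx, seed_labels, oracle, rounds=3, uncertain_band=0.3):
    """features: (n, d) per-point descriptors; oracle(i) supplies extra fg/bg labels for refinement."""
    train_idx, train_lab = list(seed_idx), list(seed_labels)
    clf = SVC(kernel="rbf", gamma="scale")
    for _ in range(rounds):
        clf.fit(features[train_idx], train_lab)
        score = clf.decision_function(features)           # signed distance to the decision border
        uncertain = np.where(np.abs(score) < uncertain_band)[0]
        if len(uncertain) == 0:
            break
        extra = uncertain[:20]                             # sample a few unreliable points
        train_idx += list(extra)
        train_lab += [oracle(i) for i in extra]            # locally added training examples
    return clf.predict(features)

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    fg = rng.normal([2, 2], 0.6, (200, 2)); bg = rng.normal([0, 0], 0.6, (200, 2))
    X = np.vstack([fg, bg]); y = np.array([1] * 200 + [0] * 200)
    seeds = [0, 1, 2, 200, 201, 202]                       # pretend these are high-certainty points
    pred = segment_motion(X, seeds, list(y[seeds]), oracle=lambda i: int(y[i]))
    print("accuracy:", (pred == y).mean())
```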

17.
We present the design of an interactive image-based modeling tool that enables a user to quickly generate detailed 3D models with texture from a set of calibrated input images. Our main contribution is an intuitive user interface that is entirely based on simple 2D painting operations and does not require any technical expertise from the user or difficult pre-processing of the input images. One central component of our tool is a GPU-based multi-view stereo reconstruction scheme, implemented as an incremental algorithm that runs in the background during user interaction so that the user does not notice any significant response delay.

18.
We present ‘Smart Scribbles’—a new scribble‐based interface for user‐guided segmentation of digital sketchy drawings. In contrast to previous approaches based on simple selection strategies, Smart Scribbles exploits richer geometric and temporal information, resulting in a more intuitive segmentation interface. We introduce a novel energy minimization formulation in which both geometric and temporal information from digital input devices is used to define stroke‐to‐stroke and scribble‐to‐stroke relationships. Although the minimization of this energy is, in general, an NP‐hard problem, we use a simple heuristic that leads to a good approximation and permits an interactive system able to produce accurate labellings even for cluttered sketchy drawings. We demonstrate the power of our technique in several practical scenarios such as sketch editing, as‐rigid‐as‐possible deformation and registration, and on‐the‐fly labelling based on pre‐classified guidelines.
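To give a feel for the kind of heuristic labelling the abstract alludes to, here is a minimal sketch in which each stroke greedily picks the scribble label that minimises a unary (scribble-to-stroke) cost plus a pairwise (stroke-to-stroke) disagreement cost. The specific cost terms, the processing order and all names are illustrative assumptions, not the paper's energy.

```python
# Minimal sketch: greedy assignment of scribble labels to sketch strokes using
# a distance-based unary term and a neighbour-disagreement pairwise term.
import numpy as np

def greedy_label_strokes(stroke_pts, scribbles, neighbors, lam=0.5):
    """
    stroke_pts : list of (n_i, 2) arrays, one polyline per stroke
    scribbles  : dict label -> (m, 2) array of user scribble points
    neighbors  : dict stroke index -> list of neighbouring stroke indices
    """
    def dist(a, b):
        # closest point-pair distance between two point sets
        return float(np.min(np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)))

    labels = {}
    # label strokes closest to any scribble first, so confident assignments come early
    order = sorted(range(len(stroke_pts)),
                   key=lambda i: min(dist(stroke_pts[i], s) for s in scribbles.values()))
    for i in order:
        best, best_cost = None, np.inf
        for lbl, scr in scribbles.items():
            unary = dist(stroke_pts[i], scr)                        # scribble-to-stroke term
            pairwise = sum(1 for j in neighbors.get(i, [])          # stroke-to-stroke term
                           if j in labels and labels[j] != lbl)
            cost = unary + lam * pairwise
            if cost < best_cost:
                best, best_cost = lbl, cost
        labels[i] = best
    return labels

if __name__ == "__main__":
    strokes = [np.array([[0.0, 0.0], [1.0, 0.0]]),
               np.array([[1.1, 0.1], [2.0, 0.0]]),
               np.array([[5.0, 5.0], [6.0, 5.0]])]
    scribbles = {"face": np.array([[0.5, 0.2]]), "hair": np.array([[5.5, 5.2]])}
    neighbors = {0: [1], 1: [0], 2: []}
    print(greedy_label_strokes(strokes, scribbles, neighbors))
```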

19.
We propose a novel rendering method that supports interactive BRDF editing as well as relighting on a 3D scene. For interactive BRDF editing, we linearize an analytic BRDF model with basis BRDFs obtained from a principal component analysis. For each basis BRDF, the radiance transfer is precomputed and stored in vector form. At rendering time, the illumination of a point is computed by multiplying the radiance-transfer vectors of the basis BRDFs by the incoming radiance from the gather samples and then linearly combining the results, weighted by user-controlled parameters. To improve the level of accuracy, a set of sub-area samples associated with each gather sample refines the glossy reflection of the geometric details without increasing the precomputation time. We demonstrate the approach with a number of examples to verify the real-time performance of relighting and BRDF editing on 3D scenes with complex lighting and geometry.
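The relighting equation described above can be written as a user-weighted combination of precomputed per-basis transfer vectors dotted with the incoming radiance. The sketch below is a minimal numerical illustration; the array shapes and the way the transfer matrix is produced are assumptions, not the paper's precomputation.

```python
# Minimal sketch: outgoing radiance as a weighted sum over basis BRDFs of
# (precomputed transfer vector) . (incoming radiance at the gather samples).
import numpy as np

def relight(transfer, incoming, weights):
    """
    transfer : (k, s) precomputed radiance-transfer vectors, one row per basis BRDF
    incoming : (s,)   incoming radiance at the s gather samples for this frame
    weights  : (k,)   user-controlled BRDF coefficients in the PCA basis
    """
    per_basis = transfer @ incoming        # radiance contribution of each basis BRDF
    return float(weights @ per_basis)      # linear combination -> response of the edited BRDF

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    T = rng.random((4, 64))                # 4 basis BRDFs, 64 gather samples (illustrative sizes)
    L = rng.random(64)
    print(relight(T, L, np.array([0.7, 0.2, 0.1, 0.0])))
```

Because the transfer vectors are fixed, editing the BRDF only changes the small weight vector, which is why the combination can be evaluated in real time.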

20.
We present a novel computational framework for physically and chemically based simulations of analog alternative photographic processes. In the real world, these processes allow the creation of very personal and unique depictions due to the combination of the chemicals used, the physical interaction with liquid solutions, and the individual craftsmanship of the artist. Our work focuses not only on achieving similarly compelling results, but on the manual process as well, introducing a novel exploratory approach for interactive digital image creation and manipulation. With such an emphasis on user interaction, our simulations are devised to run on tablet devices; we therefore propose combining a lightweight data-driven model that simulates the chemical reactions involved with efficient fluid simulations that modulate them. This combination allows realistic gesture-based user interaction with constant visual feedback in real time. Using the proposed framework, we have built two prototypes with different tradeoffs between realism and flexibility, showing its potential for building novel image editing tools.
