Similar Literature
20 similar documents found (search time: 15 ms)
1.
We describe an algorithm for generating panoramic video from unstructured camera arrays. Artifact-free panorama stitching is impeded by parallax between input views. Common strategies such as multi-level blending or minimum-energy seams produce seamless results on quasi-static input, but on video input they introduce noticeable visual artifacts due to the lack of global temporal and spatial coherence. In this paper, we extend the basic concept of local warping for parallax removal. First, we introduce an error measure with increased sensitivity to stitching artifacts in regions with pronounced structure. Using this measure, our method efficiently finds an optimal ordering of pair-wise warps for robust stitching with minimal parallax artifacts. Weighted extrapolation of warps in non-overlap regions ensures temporal stability while avoiding visual discontinuities around transitions between views. The remaining global deformation introduced by the warps is spread over the entire panorama domain using constrained relaxation, staying as close as possible to the original input views. In combination, these contributions form the first system for spatiotemporally stable panoramic video stitching from unstructured camera-array input.
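The abstract does not give the error measure in closed form; as a rough illustration of "increased sensitivity in regions with pronounced structure", the following Python/NumPy sketch weights pixel differences in the overlap by local gradient magnitude (the weighting scheme is an assumption, not the paper's formula). A stitching pipeline could use such a score to greedily pick the ordering of pair-wise warps, cheapest pair first.

```python
import numpy as np

def structure_weighted_error(a, b):
    """Stitching-error sketch: pixel differences in the overlap region are
    weighted by local gradient magnitude, so misalignments in highly
    structured regions cost more than those in flat regions.
    `a` and `b` are grayscale float arrays of two warped, overlapping views."""
    diff = np.abs(a - b)
    gy, gx = np.gradient(0.5 * (a + b))        # gradients of the mean image
    structure = np.sqrt(gx**2 + gy**2)          # local structure strength
    return float(np.sum(diff * (1.0 + structure)))
```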

2.
Images and videos captured by portable devices (e.g., cellphones, DV cameras) often have limited fields of view. Image stitching, also referred to as mosaicing or panorama creation, can produce a wide-angle image by compositing several photographs. Although various methods have been developed for image stitching in recent years, few works address the video stitching problem. In this paper, we present the first system to stitch videos captured by hand-held cameras. We first recover the 3D camera paths and a sparse set of 3D scene points using the CoSLAM system, and densely reconstruct the 3D scene in the overlapping regions. Then, we generate a smooth virtual camera path that stays in the middle of the original paths. Finally, the stitched video is synthesized along the virtual path as if it were captured from this new trajectory. The warping required for the stitching is obtained by optimizing over both temporal stability and alignment quality, leveraging the 3D information at our disposal. Experiments show that our method produces high-quality stitching results for various challenging scenarios.
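As a toy illustration of the "virtual camera path in the middle" idea, this sketch averages the recovered per-frame camera positions and smooths the result with a moving average (Python/NumPy; the smoother, the window size, and the position-only treatment are assumptions; the paper optimizes the path jointly with temporal stability and alignment quality):

```python
import numpy as np

def virtual_path(paths, window=15):
    """Average the recovered camera position paths (each a (frames, 3)
    array) to get a path 'in the middle', then smooth each coordinate
    with a moving average of the given window length."""
    mid = np.mean(np.stack(paths), axis=0)      # (frames, 3) midpoint path
    kernel = np.ones(window) / window
    return np.column_stack([
        np.convolve(mid[:, d], kernel, mode="same") for d in range(3)
    ])
```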

3.
Interactive isosurface visualisation has been made possible by mapping algorithms to GPU architectures. However, current state-of-the-art isosurfacing algorithms usually consume large amounts of GPU memory owing to the additional acceleration structures they require. As a result, the continued limitations on available GPU memory mean that they are unable to deal with the larger datasets that are now increasingly prevalent. This paper proposes a new parallel isosurface-extraction algorithm that exploits the blocked organisation of the parallel threads found in modern many-core platforms to achieve fast isosurface extraction and reduce the associated memory requirements. This is achieved by optimising thread co-operation within thread blocks and reducing redundant computation; ultimately, an indexed triangular mesh is produced. Experiments have shown that the proposed algorithm is much faster (up to 10×) than state-of-the-art GPU algorithms and has a much smaller memory footprint, enabling it to handle much larger datasets (up to 64×) on the same GPU.
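Much of the memory saving comes from emitting an indexed mesh rather than a triangle soup of duplicated vertices. Purely as a CPU-side illustration of that output format (the paper builds the index directly on the GPU through thread-block cooperation), here is a NumPy sketch of vertex welding:

```python
import numpy as np

def index_mesh(tri_verts, decimals=6):
    """Turn a triangle soup of shape (n_tris, 3, 3) into an indexed mesh
    by welding duplicate vertices. Coordinates are quantized so that
    floating-point near-duplicates compare equal."""
    flat = np.round(tri_verts.reshape(-1, 3), decimals)
    verts, inverse = np.unique(flat, axis=0, return_inverse=True)
    faces = inverse.reshape(-1, 3)              # per-triangle vertex indices
    return verts, faces
```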

4.
It is a challenging task for ordinary users to capture selfies with a good scene composition, given the limited freedom to position the camera. Creative hardware (e.g., selfie sticks) and software (e.g., panoramic selfie apps) solutions have been proposed to extend the background coverage of a selfie, but achieving a perfect composition on the spot, when the selfie is captured, remains difficult. In this paper, we propose a system that allows the user to shoot a selfie video by rotating the body first, and then produce a final panoramic selfie image with user-guided scene composition as post-processing. Our key technical contribution is a fully automatic, robust multi-frame segmentation and stitching framework that is tailored to the special characteristics of selfie images. We analyze the sparse feature points and employ a spatial-temporal optimization for bilayer feature segmentation, which leads to more reliable background alignment than previous image stitching techniques. The sparse classification is then propagated to all pixels to create dense foreground masks for person-background composition. Finally, based on a user-selected foreground position, our system uses content-preserving warping to produce a panoramic selfie with minimal distortion to the face region. Experimental results show that our approach can reliably generate high-quality panoramic selfies, while a simple combination of previous image stitching and segmentation approaches often fails.
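A crude stand-in for the bilayer feature segmentation: label sparse feature matches as background when they follow the dominant motion between two frames, here a RANSAC-fitted affine model (Python/NumPy; the affine model and thresholds are assumptions; the paper's spatial-temporal optimization is considerably more involved):

```python
import numpy as np

def bilayer_labels(p0, p1, thresh=2.0, iters=50):
    """Classify matched features (p0, p1: (n, 2) positions in two frames)
    as background (True) if they fit the dominant affine motion model,
    found with a simple RANSAC loop, within `thresh` pixels."""
    rng = np.random.default_rng(0)
    n = len(p0)
    A = np.hstack([p0, np.ones((n, 1))])        # homogeneous coordinates
    best_inliers = np.zeros(n, dtype=bool)
    for _ in range(iters):
        idx = rng.choice(n, size=3, replace=False)
        M, *_ = np.linalg.lstsq(A[idx], p1[idx], rcond=None)
        residual = np.linalg.norm(A @ M - p1, axis=1)
        inliers = residual < thresh
        if inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    return best_inliers                          # True = background feature
```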

5.
Mobile phones and tablets are rapidly gaining significance as omnipresent image and video capture devices. In this context, we present an algorithm that allows such devices to capture high dynamic range (HDR) video. The design of the algorithm was informed by a perceptual study that assessed the relative importance of motion and dynamic range. We found that ghosting artefacts are more visually disturbing than a reduction in dynamic range, even if a comparable number of pixels is affected by each. We incorporated these findings into a real-time, adaptive metering algorithm that seamlessly adjusts its settings to take exposures that lead to minimal visual artefacts after recombination into an HDR sequence. It is uniquely suitable for real-time selection of exposure settings. Finally, we present an off-line HDR reconstruction algorithm that is matched to the adaptive nature of our real-time metering approach.
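To make the metering idea concrete, here is a heavily simplified sketch that picks a short/long exposure pair covering as much of the scene's luminance histogram as possible (Python/NumPy; the units, the clipping thresholds, and the omission of the ghosting term that the paper's perceptual findings introduce are all simplifying assumptions):

```python
import numpy as np

def pick_exposures(log_lum_hist, bin_edges, candidates, well_depth=255.0):
    """Choose the exposure-time pair that leaves the least histogram mass
    under- or over-exposed in both shots. `log_lum_hist` is a histogram of
    scene log-luminance, `candidates` a list of exposure times; sensor
    units are arbitrary (noise floor 1, saturation `well_depth`)."""
    centers = 0.5 * (bin_edges[:-1] + bin_edges[1:])
    best, best_cost = None, np.inf
    for t_short in candidates:
        for t_long in candidates:
            if t_long <= t_short:
                continue
            v_short = np.exp(centers) * t_short
            v_long = np.exp(centers) * t_long
            # a bin is covered if either exposure records it unclipped
            covered = ((v_short > 1.0) & (v_short < well_depth)) | \
                      ((v_long > 1.0) & (v_long < well_depth))
            cost = np.sum(log_lum_hist[~covered])
            if cost < best_cost:
                best, best_cost = (t_short, t_long), cost
    return best
```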

6.
The visual analysis of multivariate projections is a challenging task because complex visual structures occur, causing fatigue or misinterpretations that distort the analysis. In fact, the same projection can lead to different analysis results. We provide visual guidance pictograms to improve the objectivity of the visual search. A visual guidance pictogram is an iconic visual density map encoding the visual structure of certain data properties. By using them to guide the analysis, structures in the projection can be better understood and mentally linked to properties in the data. We introduce a systematic scheme for designing such pictograms and provide a set of pictograms for standard visual tasks, such as correlation and distribution analysis, for standard projections like scatterplots, RadViz, and Star Coordinates. We conduct a study that compares the visual analysis of real data with and without the support of guidance pictograms. Our tests show that supporting the user's visual search with guidance pictograms decreases the training effort for a visual search and reduces analysis bias.
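A guidance pictogram for a correlation task could be generated roughly as follows: synthesize data with the target property, project it, and render the density map as an icon (Python/NumPy sketch; the bivariate-normal model and the histogram binning are assumptions, not the paper's design scheme):

```python
import numpy as np

def correlation_pictogram(rho, bins=32, n=20000, seed=0):
    """Render the expected visual density of a scatterplot for data with
    correlation `rho`, as a normalized 2D density map that can serve as
    an iconic reference during visual search."""
    rng = np.random.default_rng(seed)
    cov = [[1.0, rho], [rho, 1.0]]
    xy = rng.multivariate_normal([0.0, 0.0], cov, size=n)
    density, _, _ = np.histogram2d(xy[:, 0], xy[:, 1], bins=bins,
                                   range=[[-3, 3], [-3, 3]])
    return density / density.max()
```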

7.
Computerized route planning tools are widely used today by travelers all around the globe, while 3D terrain and urban models are becoming increasingly elaborate and abundant. This makes it feasible to generate a virtual 3D flyby along a planned route. Such a flyby may be useful either as a preview of the trip or as an after-the-fact visual summary. However, a naively generated preview is likely to contain many boring portions while skipping too quickly over areas worthy of attention. In this paper, we introduce the 3D trip synopsis: a continuous visual summary of a trip that attempts to maximize the total amount of visual interest seen by the camera. The main challenge is to generate a synopsis of a prescribed short duration while ensuring visually smooth camera motion. Using an application-specific visual interest metric, we measure the visual interest at a set of viewpoints along an initial camera path, and maximize the amount of visual interest seen in the synopsis by varying the speed along the route. A new camera path is then computed using optimization to simultaneously satisfy requirements such as smoothness, focus, and distance to the route. The process is repeated until convergence. The main technical contribution of this work is a new camera control method, which iteratively adjusts the camera trajectory and determines all of the camera trajectory parameters, including the camera position, altitude, heading, and tilt. Our results demonstrate the effectiveness of our trip synopses compared to a number of alternatives.
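The speed-variation step can be illustrated with a minimal allocation rule: give each path segment camera time proportional to its visual interest, normalized to the prescribed total duration (Python/NumPy; the proportional rule and the interest floor are assumptions; the paper couples this with smoothness optimization):

```python
import numpy as np

def time_allocation(interest, total_duration):
    """Spend camera time on each path segment in proportion to its visual
    interest, subject to the prescribed total duration. A small floor
    keeps dull segments from being skipped instantaneously."""
    interest = np.maximum(np.asarray(interest, float), 1e-3)
    return interest / interest.sum() * total_duration   # seconds/segment
```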

8.
Visual formats have advanced beyond single-view images and videos: 3D movies are commonplace, researchers have developed multi-view navigation systems, and VR is helping to push light field cameras to the mass market. However, editing tools for these media are still nascent, and even simple filtering operations like color correction or stylization are problematic: naively applying image filters per frame or per view rarely produces satisfying results due to temporal and spatial inconsistencies. Our method preserves and stabilizes filter effects while being agnostic to the inner workings of the filter. It captures filter effects in the gradient domain, then uses input frame gradients as a reference to impose temporal and spatial consistency. Our least-squares formulation adds minimal overhead compared to naive data processing. Further, when the filter cost is high, we introduce a filter transfer strategy that reduces the number of per-frame filtering computations by an order of magnitude, with only a small reduction in visual quality. We demonstrate our algorithm on several camera array formats, including stereo videos, light fields, and wide baselines.
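The least-squares formulation can be illustrated in 1D: stay close to the per-frame filtered values while matching a consistent set of reference gradients, which is a screened-Poisson-type solve (Python/NumPy sketch under that reading of the abstract; the actual method operates on full frames and multiple views):

```python
import numpy as np

def screened_poisson_1d(values, target_grad, lam=5.0):
    """Find x minimizing ||x - values||^2 + lam * ||D x - target_grad||^2,
    i.e. stay close to the filtered values while matching consistent
    gradients. Solves the normal equations
    (I + lam * D^T D) x = values + lam * D^T target_grad directly."""
    values = np.asarray(values, float)
    target_grad = np.asarray(target_grad, float)
    n = len(values)
    D = np.zeros((n - 1, n))                    # forward-difference operator
    idx = np.arange(n - 1)
    D[idx, idx] = -1.0
    D[idx, idx + 1] = 1.0
    A = np.eye(n) + lam * D.T @ D
    b = values + lam * D.T @ target_grad
    return np.linalg.solve(A, b)
```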

9.
In scientific illustration and visualization, cutaway views are often employed as an effective technique for occlusion management in densely packed scenes. We propose a novel method for authoring cutaway illustrations of mesoscopic biological models. In contrast to existing cutaway algorithms, we take advantage of the specific nature of biological models, which consist of thousands of instances of a comparably small number of different types. Our method constitutes a two-stage process. In the first step, clipping objects are placed in the scene, creating a cutaway visualization of the model. During this process, a hierarchical list of stacked bars informs the user about the instance-visibility distribution of each individual molecular type in the scene. In the second step, the visibility of each molecular type is fine-tuned through these bars, which at this point act as interactive visibility equalizers. An evaluation of our technique with domain experts confirmed that our equalizer-based approach to visibility specification is valuable and effective for both scientific and educational purposes.

10.
Color is one of the most effective visual variables and is frequently used to encode metric quantities. Contrast effects are considered harmful in data visualizations since they significantly bias our perception of colors. For instance, a gray patch appears brighter on a black background than on a white background. Accordingly, the perception of color-encoded data items depends on their surround in the rendered visualization. A method that compensates for contrast effects has been presented previously, which significantly improves users' accuracy in reading and comparing color-encoded data. That method utilizes established perception models to compensate for contrast effects, assuming an average human observer. In this paper, we provide experiments that show a significant difference in the perception of individual users. We introduce methods to personalize contrast-effect compensation and show with a user study that this outperforms the original method. We further overcome the major limitation of the original method, its runtime of several minutes: using efficient optimization and surrogate models, we reduce the runtime to milliseconds, making the method applicable in interactive visualizations.

11.
When human luminance perception operates close to its absolute threshold, i.e., the lowest perceivable absolute values, appearance changes substantially compared to common photopic or scotopic vision. In particular, most observers report perceiving temporally varying noise. Two causes are physiologically plausible: quantum noise (due to the low absolute number of photons) and spontaneous photochemical reactions. Previously, static noise with a normal distribution and no account of absolute values was combined with a blue hue shift and blur to simulate scotopic appearance on a photopic display for movies and interactive applications (e.g., games). We present a computational model that reproduces the specific distribution and dynamics of "scotopic noise" for specific absolute values. It automatically introduces a perceptually calibrated amount of noise for a specific luminance level and supports animated imagery. Our simulation runs in milliseconds at HD resolution using graphics hardware and compares favorably to simpler alternatives in a perceptual experiment.
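The quantum-noise component, at least, is straightforward to sketch: near absolute threshold only a handful of photons arrive per receptor per frame, so each frame is a Poisson draw around the mean photon count (Python/NumPy; the luminance-to-photon scaling here is a stand-in, not the paper's perceptual calibration, and spontaneous photochemical events are ignored):

```python
import numpy as np

def quantum_noise_frame(luminance, photons_at_threshold=10.0, seed=None):
    """Simulate the quantum-noise component of near-threshold vision:
    `luminance` is a float image scaled so 1.0 maps to
    `photons_at_threshold` photons per receptor per frame; each pixel is
    a Poisson draw around its mean photon count."""
    rng = np.random.default_rng(seed)
    mean_photons = np.maximum(luminance, 0.0) * photons_at_threshold
    counts = rng.poisson(mean_photons)
    return counts / photons_at_threshold        # noisy frame, same scale
```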

12.
In this paper, we present a flexible and efficient approach for integrating order-independent transparency into a deferred shading pipeline. The intermediate buffers for storing fragments to be shaded are extended with dynamic, memory-efficient storage for transparent fragments. The transparency of an object is not fixed and remains programmable until fragment processing, which allows for the implementation of advanced material effects, interaction techniques, or adaptive fade-outs. The traversal costs for shading the transparent fragments are greatly reduced by introducing a tile-based light-culling pass. During deferred shading, opaque and transparent fragments are shaded and composited in front-to-back order using the retrieved lighting information and a physically based shading model. In addition, we discuss various configurations of the system and further enhancements. Our results show that the system performs at interactive frame rates even for complex scenarios.
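The per-pixel resolve in such a pipeline is plain front-to-back "under" compositing of the depth-sorted transparent fragments, which permits early termination once the pixel is effectively opaque. A minimal sketch (Python/NumPy; the fragment representation and the termination threshold are assumptions):

```python
import numpy as np

def composite_front_to_back(fragments):
    """Resolve one pixel: `fragments` is a depth-sorted (front first) list
    of (rgb, alpha) shaded samples. Accumulate with the 'under' operator
    and stop once remaining transmittance is negligible."""
    color = np.zeros(3)
    transmittance = 1.0
    for rgb, alpha in fragments:
        color += transmittance * alpha * np.asarray(rgb, float)
        transmittance *= (1.0 - alpha)
        if transmittance < 1e-3:                # early termination
            break
    return color, 1.0 - transmittance           # final color and alpha
```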

13.
Convincing manipulation of objects in live-action videos is a difficult and often tedious task. Skilled video editors achieve this with the help of modern professional tools, but complex motions might still lack physical realism, since existing tools do not consider the laws of physics. On the other hand, physically based simulation promises a high degree of realism, but typically creates a virtual 3D scene animation rather than returning an edited version of an input live-action video. We propose a framework that combines video editing and physics-based simulation. Our tool assists unskilled users in editing an input image or video while respecting the laws of physics and also leveraging the image content. We first fit a physically based simulation that approximates the object's motion in the input video. We then allow the user to edit the physical parameters of the object, generating a new physical behavior for it. The core of our work is the formulation of an image-aware constraint within physics simulations. This constraint manifests as external control forces that guide the object in a way that encourages proper texturing at every frame while producing physically plausible motions. We demonstrate the generality of our method on a variety of physical interactions: rigid motion, multi-body collisions, cloth, and elastic bodies.
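The "external control forces" idea can be caricatured with a PD controller that pulls the simulated object toward an image-derived reference while damping its velocity (Python/NumPy; the gains and the force clamp are assumptions; the paper derives its forces from an image-aware texturing constraint rather than a plain PD law):

```python
import numpy as np

def control_force(x, v, x_ref, kp=50.0, kd=10.0, f_max=100.0):
    """PD-style external control force: attract the simulated state x
    (with velocity v) toward the reference position x_ref, clamped so it
    cannot completely overpower the underlying physics."""
    f = kp * (np.asarray(x_ref, float) - np.asarray(x, float)) \
        - kd * np.asarray(v, float)
    norm = np.linalg.norm(f)
    return f if norm <= f_max else f * (f_max / norm)
```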

14.
We present an efficient ray-tracing technique to render bokeh effects produced by parametric aspheric lenses. Contrary to conventional spherical lenses, aspheric lenses generally do not permit a simple closed-form solution for ray-surface intersections. We propose a numerical root-finding approach that uses tight proxy surfaces to ensure good initialization and convergence behavior. Additionally, we simulate mechanical imperfections resulting from lens fabrication via a texture-based approach. A fractional Fourier transform and spectral dispersion add further realism to the synthesized bokeh effect. Our approach is well suited for execution on graphics processing units (GPUs), and we demonstrate complex defocus-blur and lens-flare effects.
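A minimal sketch of the root-finding step, assuming the standard even-asphere sag profile: intersect the ray with the surface by Newton iteration on F(t) = p_z(t) - sag(|p_xy(t)|), initialized from a proxy hit (Python/NumPy; the numerical derivative and iteration limits are simplifications; the paper's tight proxies are what guarantee convergence in practice):

```python
import numpy as np

def sag(r, c, k, coeffs):
    """Even-asphere sag z(r): conic base (curvature c, conic constant k)
    plus polynomial terms A4 r^4, A6 r^6, ... Assumes r stays inside the
    conic's valid domain, which a tight proxy would guarantee."""
    r2 = r * r
    z = c * r2 / (1.0 + np.sqrt(1.0 - (1.0 + k) * c * c * r2))
    for i, a in enumerate(coeffs):
        z += a * r2 ** (i + 2)
    return z

def intersect_asphere(o, d, c, k, coeffs, t0, iters=16, eps=1e-7):
    """Newton iteration for F(t) = p_z(t) - sag(|p_xy(t)|) = 0 along the
    ray p(t) = o + t*d, started at t0 from a proxy-surface hit."""
    o, d = np.asarray(o, float), np.asarray(d, float)

    def F(t):
        p = o + t * d
        return p[2] - sag(np.hypot(p[0], p[1]), c, k, coeffs)

    t = float(t0)
    for _ in range(iters):
        f = F(t)
        if abs(f) < 1e-10:                      # converged
            break
        df = (F(t + eps) - f) / eps             # numerical derivative
        if abs(df) < 1e-12:
            break
        t -= f / df
    return t
```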

15.
We present a novel algorithm to reconstruct high-quality images from sampled pixels and gradients in gradient-domain rendering. Our approach extends screened Poisson reconstruction by adding additional regularization constraints. Our key idea is to exploit local patches in feature images, which contain per-pixel normals, textures, positions, etc., to formulate these constraints. We describe a GPU implementation of our approach that runs on the order of seconds on megapixel images. We demonstrate a significant improvement in image quality over screened Poisson reconstruction under the L1 norm. Because we adapt the regularization constraints to the noise level in the input, our algorithm is consistent and converges to the ground truth.

16.
The study of face alignment has been an area of intense research in computer vision, and its achievements are widely used in computer graphics applications. The performance of various face alignment methods is often image-dependent or somewhat random because of the strategies they employ. This study aims to develop a method that can select a good face alignment result for an input image from among the many results produced by a single method or by multiple ones. The task is challenging because different face alignment results need to be evaluated without any ground truth. This study addresses the problem by designing a feasible feature-extraction scheme to measure the quality of face alignment results. The feature is then used in various machine learning algorithms to rank different face alignment results. Our experiments show that our method is promising for ranking face alignment results and is able to pick good ones, which can enhance the overall performance of a face alignment method with a random strategy. We demonstrate the usefulness of our ranking-enhanced face alignment algorithm in two practical applications: face cartoon stylization and digital face makeup.

17.
This paper presents an efficient approach for generating weathering effects with detailed appearance variations in a single image. Previous approaches merely change the chroma or reflectance of weathered objects, which is not sufficient for materials with detailed shading and texture variations, such as growing moss and peeling plaster. Our method propagates such detailed features via seamless patch-based synthesis driven by a weathering-degree distribution. Unlike previous methods, the weathering degrees are calculated efficiently using radial basis functions, even for materials with wide color variations. We use graph-cut-based optimization to identify the most weathered region as a "weathering exemplar", from which we sample weathering patches. We demonstrate that our method can generate various types of detailed weathering effects interactively.
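RBF-based interpolation of weathering degrees from a few labeled samples has a closed-form solve; below is a minimal sketch, assuming Gaussian kernels over per-pixel color features (the feature choice and bandwidth are assumptions, not the paper's exact formulation):

```python
import numpy as np

def weathering_degree(pixels, samples, degrees, sigma=0.1):
    """Interpolate weathering degrees over the image. `pixels` (n, 3) and
    `samples` (m, 3) are feature vectors (e.g. normalized color);
    `degrees` (m,) are known weathering degrees at the samples.
    Gaussian RBF weights are solved for in closed form."""
    def gauss(X, Y):
        d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
        return np.exp(-d2 / (2.0 * sigma ** 2))
    K = gauss(samples, samples) + 1e-8 * np.eye(len(samples))  # regularized
    w = np.linalg.solve(K, degrees)
    return gauss(pixels, samples) @ w           # degree per pixel
```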

18.
We propose a new technique for in-core and out-of-core GPU ray tracing using a generalization of hierarchical occlusion culling in the style of the CHC++ method. Our method exploits the rasterization pipeline and hardware occlusion queries to create coherent batches of work for localized shader-based ray tracing kernels. By combining hierarchies in both ray space and object space, the method is able to share intermediate traversal results among multiple rays. We exploit temporal coherence among similar ray sets between frames and also within the given frame. Suitable management of the current visibility state makes it possible to benefit from occlusion culling even for less coherent ray types like diffuse reflections. Since large scenes are still a challenge for modern GPU ray tracers, our method is most useful for scenes of medium to high complexity, especially since it inherently supports ray tracing of highly complex scenes that do not fit in GPU memory. For in-core scenes, our method is comparable to CUDA ray tracing and performs up to 5.94× better than pure shader-based ray tracing.

19.
In volume visualization, transfer functions are used to classify the volumetric data and assign optical properties to the voxels. In general, transfer functions are generated in a transfer function space, the feature space constructed from data values and properties derived from the data. If volumetric objects have the same or overlapping data values, it is difficult to separate them in the transfer function space. In this paper, we present a rule-enhanced transfer function design method that allows important structures of the volume to be more effectively separated and highlighted. We define a set of rules based on the local frequency distribution of volume attributes. A rule-selection method based on a genetic algorithm is proposed to learn the set of rules that can distinguish the user-specified target tissue from other tissues. In the rendering stage, voxels satisfying these rules are rendered with higher opacities in order to highlight the target tissue. The proposed method was tested on various volumetric datasets to enhance the visualization of important structures that are difficult to visualize with traditional transfer function design methods. The results demonstrate the effectiveness of the proposed method.
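A rough stand-in for rules on the "local frequency distribution": treat a rule as an interval on the local mean and standard deviation of voxel values, and boost the opacity of matching voxels (Python/NumPy; the rule form and the boost factor are assumptions; the paper selects its rules with a genetic algorithm):

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

def rule_opacity(volume, base_opacity, rules, boost=3.0, radius=1):
    """Boost the opacity of voxels whose local statistics satisfy a rule.
    `rules` is a list of (mean_lo, mean_hi, std_lo, std_hi) intervals;
    `base_opacity` has the same shape as `volume`. Returns opacities for
    the interior voxels (a border of `radius` voxels is dropped)."""
    w = 2 * radius + 1
    win = sliding_window_view(volume, (w, w, w))    # local neighborhoods
    mean = win.mean(axis=(-3, -2, -1))
    std = win.std(axis=(-3, -2, -1))
    s = slice(radius, -radius)
    opacity = base_opacity[s, s, s].copy()
    for (m_lo, m_hi, s_lo, s_hi) in rules:
        match = (mean >= m_lo) & (mean <= m_hi) & (std >= s_lo) & (std <= s_hi)
        opacity[match] = np.minimum(1.0, opacity[match] * boost)
    return opacity
```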

20.
Fast, realistic rendering of objects in scattering media is still a challenging topic in computer graphics. In the presence of participating media, a light beam is repeatedly scattered by media particles, changing direction and spreading out. Explicitly evaluating this beam distribution would enable efficient simulation of multiple scattering events without involving costly stochastic methods. Narrow-beam theory provides explicit equations that approximate light propagation in a narrow incident beam. Based on this theory, we propose a closed-form distribution function for scattered beams. We successfully apply it to the image synthesis of scenes in which scattering occurs, and show that our proposed estimation method is more accurate than those based on the Wentzel-Kramers-Brillouin (WKB) theory.
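For orientation only, here is a small-angle (Fermi-Eyges-style) estimate of how a narrow beam spreads, which is not the paper's closed form: the angular variance grows linearly with depth and the lateral spatial variance cubically (the Henyey-Greenstein small-angle moment of roughly 2(1 - g) per scattering event is an assumption):

```python
import numpy as np

def beam_spread_sigma(depth, sigma_s, g):
    """Small-angle estimate of the lateral standard deviation of a narrow
    beam at optical depth `depth`, for scattering coefficient `sigma_s`
    and Henyey-Greenstein anisotropy `g`:
        <theta^2> = T * t  and  <x^2> = T * t^3 / 3,
    with scattering power T = 2 * sigma_s * (1 - g)."""
    T = 2.0 * sigma_s * (1.0 - g)
    return np.sqrt(T * depth ** 3 / 3.0)
```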
