Similar Literature
20 similar documents found (search time: 15 ms)
1.
We describe an algorithm for generating panoramic video from unstructured camera arrays. Artifact-free panorama stitching is impeded by parallax between input views. Common strategies such as multi-level blending or minimum-energy seams produce seamless results on quasi-static input. However, on video input these approaches introduce noticeable visual artifacts due to the lack of global temporal and spatial coherence. In this paper we extend the basic concept of local warping for parallax removal. First, we introduce an error measure with increased sensitivity to stitching artifacts in regions with pronounced structure. Using this measure, our method efficiently finds an optimal ordering of pair-wise warps for robust stitching with minimal parallax artifacts. Weighted extrapolation of warps in non-overlapping regions ensures temporal stability while avoiding visual discontinuities around transitions between views. Remaining global deformation introduced by the warps is spread over the entire panorama domain using constrained relaxation, while staying as close as possible to the original input views. In combination, these contributions form the first system for spatiotemporally stable panoramic video stitching from unstructured camera array input.
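
To make the ordering step concrete, here is a minimal Python sketch, assuming a symmetric matrix of pair-wise stitching errors between views; the greedy strategy and all names are illustrative assumptions rather than the paper's actual optimization.

```python
import numpy as np

def greedy_warp_order(error):
    """Greedily build a stitching order: repeatedly add the view whose best
    pair-wise warp error against the already-stitched set is smallest."""
    n = error.shape[0]
    masked = error.astype(float).copy()
    np.fill_diagonal(masked, np.inf)
    i, j = divmod(int(np.argmin(masked)), n)   # seed with the best pair
    order = [i, j]
    remaining = set(range(n)) - {i, j}
    while remaining:
        best = min(remaining, key=lambda v: min(error[v, u] for u in order))
        order.append(best)
        remaining.remove(best)
    return order

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    E = rng.random((5, 5))
    E = (E + E.T) / 2                          # symmetric pair-wise error matrix
    print(greedy_warp_order(E))
```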

2.
It is a challenging task for ordinary users to capture selfies with a good scene composition, given the limited freedom to position the camera. Creative hardware (e.g., selfie sticks) and software (e.g., panoramic selfie apps) solutions have been proposed to extend the background coverage of a selfie, but achieving a perfect composition on the spot, when the selfie is captured, remains difficult. In this paper, we propose a system that allows the user to shoot a selfie video by rotating the body first, and then produce a final panoramic selfie image with user-guided scene composition as post-processing. Our key technical contribution is a fully automatic, robust multi-frame segmentation and stitching framework that is tailored towards the special characteristics of selfie images. We analyze the sparse feature points and employ a spatial-temporal optimization for bilayer feature segmentation, which leads to more reliable background alignment than previous image stitching techniques. The sparse classification is then propagated to all pixels to create dense foreground masks for person-background composition. Finally, based on a user-selected foreground position, our system uses content-preserving warping to produce a panoramic selfie with minimal distortion to the face region. Experimental results show that our approach can reliably generate high-quality panoramic selfies, while a simple combination of previous image stitching and segmentation approaches often fails.

3.
We present a novel stereo-to-multiview video conversion method for glasses-free multiview displays. Different from previous stereo-to-multiview approaches, our mapping algorithm utilizes the limited depth range of autostereoscopic displays optimally and strives to preserve the scene's artistic composition and perceived depth even under strong depth compression. We first present an investigation of how perceived image quality relates to spatial frequency and disparity. The outcome of this study is utilized in a two-step mapping algorithm, where we (i) compress the scene depth to the depth range of an autostereoscopic display using a non-linear global function and (ii) enhance the depth gradients of salient objects to restore the perceived depth and salient scene structure. Finally, an adapted image domain warping algorithm is proposed to generate the multiview output, which enables overall disparity range extension.
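
A minimal sketch of the two mapping steps, assuming disparities measured in pixels, a display range of roughly ±10 pixels, a tanh compression curve and an unsharp-mask style gradient boost; these are illustrative stand-ins, not the paper's calibrated functions.

```python
import numpy as np

def compress_disparity(d, d_display=10.0):
    """Step (i): map scene disparities into the display range with a smooth,
    non-linear global function (tanh used here as an illustrative curve)."""
    scale = np.max(np.abs(d)) + 1e-8
    return d_display * np.tanh(d / scale)

def enhance_salient_gradients(d, saliency, gain=0.5):
    """Step (ii): unsharp-mask the disparity map, weighted by saliency, to
    restore local depth contrast of salient objects after compression."""
    p = np.pad(d, 1, mode="edge")
    blur = sum(p[1 + dy:p.shape[0] - 1 + dy, 1 + dx:p.shape[1] - 1 + dx]
               for dy in (-1, 0, 1) for dx in (-1, 0, 1)) / 9.0   # 3x3 mean
    return d + gain * saliency * (d - blur)
```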

4.
In virtual reality (VR) applications, the contents are usually generated by creating a 360° video panorama of a real-world scene. Although many capture devices are being released, getting high-resolution panoramas and displaying a virtual world in real time remains challenging due to its computationally demanding nature. In this paper, we propose a real-time 360° video foveated stitching framework that renders the entire scene at different levels of detail, aiming to create a high-resolution panoramic video in real time that can be streamed directly to the client. Our foveated stitching algorithm takes videos from multiple cameras as input and, combined with measurements of human visual attention (i.e., the acuity map and the saliency map), greatly reduces the number of pixels to be processed. We further parallelize the algorithm using the GPU to achieve a responsive interface and validate our results via a user study. Our system accelerates graphics computation by a factor of 6 on a Google Cardboard display.
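
A hedged sketch of how an acuity falloff around the gaze point can be combined with a saliency map into a per-pixel processing weight; the falloff constant and the max-blend are assumptions, not the paper's model.

```python
import numpy as np

def foveation_weight(height, width, gaze, saliency=None, falloff=0.002):
    """Return a weight in [0, 1] per pixel; 1 = full resolution at the fovea."""
    ys, xs = np.mgrid[0:height, 0:width]
    ecc = np.hypot(ys - gaze[0], xs - gaze[1])      # eccentricity in pixels
    acuity = 1.0 / (1.0 + falloff * ecc)            # simple acuity falloff
    if saliency is not None:
        acuity = np.maximum(acuity, saliency)       # keep salient regions sharp
    return acuity

if __name__ == "__main__":
    w = foveation_weight(480, 960, gaze=(240, 480))
    print(w.min(), w.max())
```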

5.
Interactive isosurface visualisation has been made possible by mapping algorithms to GPU architectures. However, current state-of-the-art isosurfacing algorithms usually consume large amounts of GPU memory owing to the additional acceleration structures they require. As a result, the continued limitations on available GPU memory mean that they are unable to deal with the larger datasets that are now becoming increasingly prevalent. This paper proposes a new parallel isosurface-extraction algorithm that exploits the blocked organisation of the parallel threads found in modern many-core platforms to achieve fast isosurface extraction and reduce the associated memory requirements. This is achieved by optimising thread co-operation within thread-blocks and reducing redundant computation; ultimately, an indexed triangular mesh can be produced. Experiments have shown that the proposed algorithm is much faster (up to 10×) than state-of-the-art GPU algorithms and has a much smaller memory footprint, enabling it to handle much larger datasets (up to 64×) on the same GPU.

6.
The visual analysis of multivariate projections is a challenging task, because complex visual structures occur. This causes fatigue or misinterpretations, which distorts the analysis. In fact, the same projection can lead to different analysis results. We provide visual guidance pictograms to improve objectivity of the visual search. A visual guidance pictogram is an iconic visual density map encoding the visual structure of certain data properties. By using them to guide the analysis, structures in the projection can be better understood and mentally linked to properties in the data. We introduce a systematic scheme for designing such pictograms and provide a set of pictograms for standard visual tasks, such as correlation and distribution analysis, for standard projections like scatterplots, RadVis, and Star Coordinates. We conduct a study that compares the visual analysis of real data with and without the support of guidance pictograms. Our tests show that the training effort for a visual search can be decreased and the analysis bias can be reduced by supporting the user's visual search with guidance pictograms.

7.
We describe a painting machine and associated algorithms. Our modified industrial robot works with visual feedback and applies acrylic paint from a repository to a canvas until the created painting resembles a given input image or scene. The color differences between canvas and input are used to direct the application of new strokes. We present two optimization-based algorithms that place such strokes in relation to already existing ones. Using these methods we are able to create different painting styles, one that tries to match the input colors with almost transparent strokes and another one that creates dithering patterns of opaque strokes that approximate the input color. The machine produces paintings that mimic those created by human painters and allows us to study the painting process as well as the creation of artworks.
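
A hedged sketch of feedback-driven stroke placement: put an opaque stroke wherever the canvas currently deviates most from the target image. The disc-shaped stroke and the greedy loop are illustrative assumptions, not the robot's actual optimization.

```python
import numpy as np

def paint(target, radius=4, n_strokes=500):
    """target: float RGB image in [0, 1]. Returns a painted approximation."""
    canvas = np.ones_like(target)                     # start from a white canvas
    h, w, _ = target.shape
    ys, xs = np.mgrid[0:h, 0:w]
    for _ in range(n_strokes):
        err = np.sum((canvas - target) ** 2, axis=2)  # per-pixel color difference
        y, x = np.unravel_index(np.argmax(err), err.shape)
        mask = (ys - y) ** 2 + (xs - x) ** 2 <= radius ** 2
        canvas[mask] = target[y, x]                   # one opaque, disc-shaped stroke
    return canvas
```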

8.
In this paper, we present a flexible and efficient approach for the integration of order-independent transparency into a deferred shading pipeline. The intermediate buffers for storing fragments to be shaded are extended with a dynamic and memory-efficient storage for transparent fragments. The transparency of an object is not fixed and remains programmable until fragment processing, which allows for the implementation of advanced material effects, interaction techniques or adaptive fade-outs. Traversal costs for shading the transparent fragments are greatly reduced by introducing a tile-based light-culling pass. During deferred shading, opaque and transparent fragments are shaded and composited in front-to-back order using the retrieved lighting information and a physically-based shading model. In addition, we discuss various configurations of the system and further enhancements. Our results show that the system performs at interactive frame rates even for complex scenarios.
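
The front-to-back compositing of the sorted transparent fragments follows the standard "under" operator; a minimal sketch, assuming each fragment is a (depth, colour, alpha) tuple (the data layout is an assumption):

```python
def composite_front_to_back(fragments, background):
    """fragments: list of (depth, (r, g, b), alpha); background: (r, g, b)."""
    colour = [0.0, 0.0, 0.0]
    transmittance = 1.0
    for _, (r, g, b), a in sorted(fragments, key=lambda f: f[0]):  # near to far
        colour = [c + transmittance * a * s for c, s in zip(colour, (r, g, b))]
        transmittance *= (1.0 - a)
    return tuple(c + transmittance * s for c, s in zip(colour, background))

# Example: a blue fragment in front of a red one, over a white background.
print(composite_front_to_back(
    [(2.0, (1, 0, 0), 0.5), (1.0, (0, 0, 1), 0.25)], (1, 1, 1)))
```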

9.
We present a novel algorithm to reconstruct high-quality images from sampled pixels and gradients in gradient-domain rendering. Our approach extends screened Poisson reconstruction by adding additional regularization constraints. Our key idea is to exploit local patches in feature images, which contain per-pixel normals, textures, positions, etc., to formulate these constraints. We describe a GPU implementation of our approach that runs on the order of seconds on megapixel images. We demonstrate a significant improvement in image quality over screened Poisson reconstruction under the L1 norm. Because we adapt the regularization constraints to the noise level in the input, our algorithm is consistent and converges to the ground truth.
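
For context, a minimal 1D sketch of plain screened Poisson reconstruction (without the paper's feature-based regularization): solve the normal equations (αI + DᵀD)x = αp + Dᵀg for pixel samples p and gradient samples g. The value of α and the test signal are illustrative assumptions.

```python
import numpy as np

def screened_poisson_1d(p, g, alpha=0.1):
    """Reconstruct a 1D signal from noisy pixels p (n values) and noisy
    forward-difference gradients g (n-1 values)."""
    n = len(p)
    D = np.zeros((n - 1, n))
    for i in range(n - 1):                    # forward-difference operator
        D[i, i], D[i, i + 1] = -1.0, 1.0
    A = alpha * np.eye(n) + D.T @ D
    b = alpha * p + D.T @ g
    return np.linalg.solve(A, b)

if __name__ == "__main__":
    x = np.linspace(0, 1, 64)
    truth = np.sin(2 * np.pi * x)
    rng = np.random.default_rng(1)
    p = truth + 0.3 * rng.standard_normal(64)            # noisy pixel samples
    g = np.diff(truth) + 0.05 * rng.standard_normal(63)  # less noisy gradients
    rec = screened_poisson_1d(p, g)
    print(np.abs(rec - truth).mean(), np.abs(p - truth).mean())
```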

10.
Mobile phones and tablets are rapidly gaining significance as omnipresent image and video capture devices. In this context we present an algorithm that allows such devices to capture high dynamic range (HDR) video. The design of the algorithm was informed by a perceptual study that assesses the relative importance of motion and dynamic range. We found that ghosting artefacts are more visually disturbing than a reduction in dynamic range, even if a comparable number of pixels is affected by each. We incorporated these findings into a real-time, adaptive metering algorithm that seamlessly adjusts its settings to take exposures that will lead to minimal visual artefacts after recombination into an HDR sequence. It is uniquely suitable for real-time selection of exposure settings. Finally, we present an off-line HDR reconstruction algorithm that is matched to the adaptive nature of our real-time metering approach.
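
A hedged sketch of the recombination step, assuming a linear sensor response and a hat-shaped weighting that down-weights under- and over-exposed pixels; this is the textbook exposure merge, not the paper's specific reconstruction.

```python
import numpy as np

def merge_hdr(frames, exposure_times):
    """frames: list of float arrays in [0, 1]; exposure_times: seconds.
    Returns a per-pixel radiance estimate."""
    num = np.zeros_like(frames[0])
    den = np.zeros_like(frames[0])
    for z, t in zip(frames, exposure_times):
        w = 1.0 - np.abs(2.0 * z - 1.0)   # hat weight, peaks at mid-grey
        num += w * z / t                  # radiance contribution of this exposure
        den += w
    return num / np.maximum(den, 1e-6)
```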

11.
Light field videos express the entire visual information of an animated scene, but their sheer size typically makes capture, processing and display an off-line process, i.e., the time between initial capture and final display is far from real time. In this paper we propose a solution for one of the key bottlenecks in such a processing pipeline, which is a reliable depth reconstruction possibly for many views. This is enabled by a novel correspondence algorithm converting the video streams from a sparse array of off-the-shelf cameras into an array of animated depth maps. The algorithm is based on a generalization of the classic multi-resolution Lucas-Kanade correspondence algorithm from a pair of images to an entire array. Special inter-image confidence consolidation allows recovery from unreliable matching in some locations and some views. It can be implemented efficiently in massively parallel hardware, allowing for interactive computations. The resulting depth quality as well as the computation performance compares favorably to other state-of-the-art light-field-to-depth approaches, as well as stereo matching techniques. Another outcome of this work is a data set of light field videos that are captured with multiple variants of sparse camera arrays.
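
As a reference point, a minimal sketch of the classic single-level Lucas-Kanade step for one patch of an image pair (the paper generalizes the multi-resolution variant to an entire camera array); the window size and least-squares solve are illustrative choices.

```python
import numpy as np

def lucas_kanade_patch(im0, im1, y, x, r=7):
    """Estimate the (dy, dx) displacement of the patch centred at (y, x)."""
    win0 = im0[y - r:y + r + 1, x - r:x + r + 1].astype(float)
    win1 = im1[y - r:y + r + 1, x - r:x + r + 1].astype(float)
    iy, ix = np.gradient(win0)            # spatial gradients
    it = win1 - win0                      # temporal difference
    A = np.array([[np.sum(ix * ix), np.sum(ix * iy)],
                  [np.sum(ix * iy), np.sum(iy * iy)]])
    b = -np.array([np.sum(ix * it), np.sum(iy * it)])
    dx, dy = np.linalg.lstsq(A, b, rcond=None)[0]
    return dy, dx
```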

12.
Attention-based level-of-detail (LOD) managers downgrade the quality of areas that are expected to go unnoticed by an observer to economize on computational resources. The perceptibility of lowered visual fidelity is determined by the accuracy of the attention model that assigns quality levels. Most previous attention-based LOD managers do not take into account saliency provoked by context, failing to provide consistently accurate attention predictions. In this work, we extend a recent high-level saliency model with four additional components yielding more accurate predictions: an object-intrinsic factor accounting for the canonical form of objects, an object-context factor for contextual isolation of objects, a feature uniqueness term that accounts for the number of salient features in an image, and a temporal context that generates recurring fixations for objects inconsistent with the context. We conduct a perceptual experiment to acquire the weighting factors to initialize our model. We design C-LOD, a LOD manager that maintains a constant frame rate on mobile devices by dynamically re-adjusting material quality on secondary visual features of non-attended objects. In a proof-of-concept study we establish that by incorporating C-LOD, complex effects such as parallax occlusion mapping, usually omitted on mobile devices, can now be employed without overloading GPU capability and, at the same time, conserving battery power.
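
A hedged sketch of combining the four components into an attention score and mapping it to a discrete quality level; the weights and level count are placeholders, not the values obtained in the paper's perceptual experiment.

```python
def attention_score(intrinsic, context, uniqueness, temporal,
                    w=(0.3, 0.3, 0.2, 0.2)):
    """Weighted combination of the four saliency components (all in [0, 1])."""
    return (w[0] * intrinsic + w[1] * context +
            w[2] * uniqueness + w[3] * temporal)

def quality_level(score, levels=4):
    """Map an attention score in [0, 1] to a discrete LOD (0 = lowest quality)."""
    return min(int(score * levels), levels - 1)

print(quality_level(attention_score(0.9, 0.4, 0.7, 0.1)))
```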

13.
Empty-space skipping is an essential acceleration technique for volume rendering. Image-order empty-space skipping is not well suited to GPU implementation, since it must perform checks on, essentially, a per-sample basis, as in kd-tree traversal; this can lead to a great deal of divergent branching at runtime, which is very expensive in a modern GPU pipeline. In contrast, object-order empty-space skipping is extremely fast on a GPU and has negligible overheads compared with approaches without empty-space skipping, since it employs the hardware unit for rasterisation. However, previous object-order algorithms have been able to skip only exterior empty space and not the interior empty space that lies inside or between volume objects. In this paper, we address these issues by proposing a multi-layer depth-peeling approach that can obtain all of the depth layers of the tight-fitting bounding geometry of the isosurface in a single rasterising pass. The maximum count of layers peeled by our approach can be up to thousands, while maintaining 32-bit floating-point accuracy, which was not possible previously. By ray tracing only the valid ray segments between each consecutive pair of depth layers, we can skip both the interior and exterior empty space efficiently. In comparisons with three state-of-the-art GPU isosurface rendering algorithms, this technique achieved much faster rendering across a variety of data sets.
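
A minimal sketch of sampling a ray only inside the valid segments defined by consecutive depth layers returned by depth peeling; the pairing of layers into entry/exit intervals and the step size are illustrative assumptions.

```python
import numpy as np

def sample_positions(depth_layers, step=0.5):
    """depth_layers: sorted entry/exit depths along one ray, e.g. [t0, t1, t2, t3].
    Returns sample depths restricted to [t0, t1], [t2, t3], ...; the empty space
    between t1 and t2 is skipped entirely."""
    samples = []
    for k in range(0, len(depth_layers) - 1, 2):
        t_near, t_far = depth_layers[k], depth_layers[k + 1]
        samples.append(np.arange(t_near, t_far, step))
    return np.concatenate(samples) if samples else np.empty(0)

print(sample_positions([1.0, 3.0, 8.0, 9.5]))
```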

14.
Numerous algorithms have been researched in the area of texture synthesis. However, it remains difficult to design a low-cost synthesis scheme capable of generating high-quality results while simultaneously achieving real-time performance. Additional challenges include making a scheme parallel and being able to partially render/synthesize high-resolution textures. Furthermore, it would be beneficial for a synthesis scheme to be able to incorporate texture compression and minimize bandwidth usage, especially on mobile devices. In this paper, we propose a practical method which has low computational complexity and produces textures with small storage requirements. Through the use of an index table, random access to the texture is another essential advantage, which makes parallel rendering feasible, including the generation of mip-map sequences. Integrating the index table with existing compression algorithms, for example ETC or PVRTC, further reduces the bandwidth and avoids the need for a separate, computationally expensive pass to compress the synthesized output. It should be noted that our texture synthesis achieves real-time performance and low power consumption even on mobile devices, for which texture synthesis has traditionally been considered too expensive.
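
A hedged sketch of index-table-based random access: each output tile stores the exemplar coordinates of its source patch, so any texel (or mip level) can be synthesized independently and in parallel. The random table filling below stands in for a real synthesizer; all names are assumptions.

```python
import numpy as np

def build_index_table(exemplar_hw, out_tiles_hw, tile, seed=0):
    """One (y, x) exemplar offset per output tile (random stand-in here)."""
    rng = np.random.default_rng(seed)
    max_y, max_x = exemplar_hw[0] - tile, exemplar_hw[1] - tile
    return rng.integers(0, [max_y, max_x], size=(*out_tiles_hw, 2))

def fetch_texel(exemplar, index_table, tile, y, x):
    """Random access: look up the source patch for (y, x), then read one texel."""
    ty, tx = y // tile, x // tile
    sy, sx = index_table[ty, tx]
    return exemplar[sy + y % tile, sx + x % tile]

exemplar = np.arange(64 * 64).reshape(64, 64)
table = build_index_table(exemplar.shape, (8, 8), tile=16)
print(fetch_texel(exemplar, table, 16, y=37, x=90))
```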

15.
Gradient-domain compositing has been widely used to create a seamless composite whose gradient is close to a composite gradient field generated from one or more registered images. The key to this problem is to solve a Poisson equation, whose unknown variables can reach the size of the composite if no region of interest is drawn explicitly, making both the time and memory cost expensive when processing multi-megapixel images. In this paper, we propose an approximate projection method based on biorthogonal multiresolution analyses (MRA) to solve the Poisson equation. Unlike previous Poisson equation solvers which try to converge to the accurate solution with iterative algorithms, we use biorthogonal compactly supported curl-free wavelets as the fundamental bases to approximately project the composite gradient field onto a curl-free vector space. Then, the composite can be efficiently recovered by applying a fast inverse wavelet transform. For an n-pixel composite, our method requires only 2n memory for all vector fields and is more efficient than state-of-the-art methods while achieving almost identical results. Specifically, experiments show that our method gains a 5× speedup over streaming multigrid in certain cases.
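
For reference, a minimal sketch of the underlying gradient-domain problem: recover a composite whose gradient matches a target field by solving a Poisson equation, shown here with plain Jacobi iterations (the kind of iterative solver the wavelet projection replaces); boundary handling and iteration count are illustrative assumptions.

```python
import numpy as np

def poisson_composite(gx, gy, init, iters=500):
    """gx, gy: target gradient field (forward differences); init: initial
    composite, whose border also serves as the boundary condition."""
    f = init.astype(float).copy()
    # Divergence of the target gradient field (backward differences).
    div = np.zeros_like(f)
    div[:, 1:] += gx[:, 1:] - gx[:, :-1]
    div[1:, :] += gy[1:, :] - gy[:-1, :]
    for _ in range(iters):                 # Jacobi relaxation of  laplacian(f) = div
        f[1:-1, 1:-1] = 0.25 * (f[:-2, 1:-1] + f[2:, 1:-1] +
                                f[1:-1, :-2] + f[1:-1, 2:] - div[1:-1, 1:-1])
    return f
```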

16.
We propose a framework for data-driven manipulation and synthesis of component-based vector graphics. Using labelled vector graphical images of a given type of object as input, our processing pipeline produces training data, learns a probabilistic Bayesian network from that training data, and offers various data-driven vector-related tools using synthesis functions. The tools range from data-driven vector design to automatic synthesis of vector graphics. Our tools were well received by designers; our model provides good generalisation performance, even from small data sets; and our synthesis method produces vector graphics deemed significantly more plausible than those of alternative methods.

17.
When human luminance perception operates close to its absolute threshold, i.e., the lowest perceivable absolute values, appearance changes substantially compared to common photopic or scotopic vision. In particular, most observers report perceiving temporally varying noise. Two reasons are physiologically plausible: quantum noise (due to the low absolute number of photons) and spontaneous photochemical reactions. Previously, static noise with a normal distribution and no account for absolute values was combined with a blue hue shift and blur to simulate scotopic appearance on a photopic display for movies and interactive applications (e.g., games). We present a computational model to reproduce the specific distribution and dynamics of "scotopic noise" for specific absolute values. It automatically introduces a perceptually calibrated amount of noise for a specific luminance level and supports animated imagery. Our simulation runs in milliseconds at HD resolution using graphics hardware and compares favorably to simpler alternatives in a perceptual experiment.
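
A hedged sketch of luminance-dependent quantum noise: draw a Poisson photon count whose rate scales with absolute luminance and normalize it back, so darker pixels show relatively stronger temporal noise. The photons-per-luminance constant is an assumption, not the paper's calibrated model.

```python
import numpy as np

def add_quantum_noise(luminance, photons_per_unit=50.0, seed=None):
    """luminance: array of absolute luminance values (arbitrary linear units)."""
    rng = np.random.default_rng(seed)
    rate = np.maximum(luminance, 0.0) * photons_per_unit
    counts = rng.poisson(rate)            # simulated photon counts per pixel/frame
    return counts / photons_per_unit      # back to luminance units

print(add_quantum_noise(np.full((4, 4), 0.02), seed=3))
```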

18.
In this paper, we propose an interactive technique for constructing a 3D scene via sparse user inputs. We represent a 3D scene in the form of a Layered Depth Image (LDI), which is composed of a foreground layer and a background layer, each with a corresponding texture and depth map. Given user-specified sparse depth inputs, depth maps are computed based on superpixels using interpolation with geodesic-distance weighting and an optimization framework. This computation is done immediately, which allows the user to edit the LDI interactively. Additionally, our technique automatically estimates depth and texture in occluded regions using the depth discontinuity. In our interface, the user paints strokes on the 3D model directly. The drawn strokes serve as 3D handles with which the user can pull or push the 3D surface easily and intuitively with real-time feedback. We show that our technique enables efficient modeling of LDIs that produce convincing 3D effects.
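
A hedged sketch of propagating sparse depth inputs to superpixel centroids by distance-weighted interpolation; plain Euclidean image-space distance is used here as a simple stand-in for the paper's geodesic-distance weighting, and all names are assumptions.

```python
import numpy as np

def interpolate_depths(centroids, sparse_points, sigma=30.0):
    """centroids: (N, 2) superpixel centres (y, x);
    sparse_points: list of (y, x, depth) user inputs."""
    d_in = np.array([d for _, _, d in sparse_points])
    depths = np.zeros(len(centroids))
    for i, (cy, cx) in enumerate(centroids):
        w = np.array([np.exp(-np.hypot(cy - y, cx - x) ** 2 / sigma ** 2)
                      for y, x, _ in sparse_points])       # Gaussian falloff
        depths[i] = np.sum(w * d_in) / (np.sum(w) + 1e-8)  # weighted average
    return depths
```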

19.
20.
In scientific illustrations and visualization, cutaway views are often employed as an effective technique for occlusion management in densely packed scenes. We propose a novel method for authoring cutaway illustrations of mesoscopic biological models. In contrast to existing cutaway algorithms, we take advantage of the specific nature of biological models, which consist of thousands of instances belonging to a comparably small number of different types. Our method constitutes a two-stage process. In the first step, clipping objects are placed in the scene, creating a cutaway visualization of the model. During this process, a hierarchical list of stacked bars informs the user about the instance visibility distribution of each individual molecular type in the scene. In the second step, the visibility of each molecular type is fine-tuned through these bars, which at this point act as interactive visibility equalizers. An evaluation of our technique with domain experts confirmed that our equalizer-based approach for visibility specification is valuable and effective for both scientific and educational purposes.
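
A hedged sketch of a per-type visibility equalizer: count the instances of each molecular type that survive the clipping objects, then thin them to a user-set target fraction. The data layout and the random thinning are illustrative assumptions.

```python
import random

def equalise_visibility(instances, targets, seed=0):
    """instances: list of dicts {'type': str, 'clipped': bool};
    targets: dict mapping type -> desired visible fraction in [0, 1]."""
    rng = random.Random(seed)
    visible = {}
    for inst in instances:
        if not inst['clipped']:                      # survived the cutaway
            visible.setdefault(inst['type'], []).append(inst)
    shown = []
    for t, group in visible.items():
        keep = int(round(targets.get(t, 1.0) * len(group)))
        shown.extend(rng.sample(group, keep))        # thin to the equalizer setting
    return shown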
