Similar Documents
20 similar documents retrieved.
1.
Despite their high popularity, common high dynamic range (HDR) methods are still limited in their practical applicability: They assume that the input images are perfectly aligned, which is often violated in practice. Our paper not only frees the user from this unrealistic limitation, but even turns the missing alignment into an advantage: By exploiting the multiple exposures, we can create a super‐resolution image. The alignment step is performed by a modern energy‐based optic flow approach that takes into account the varying exposure conditions. Moreover, it produces dense displacement fields with subpixel precision. As a consequence, our approach can handle arbitrarily complex motion patterns caused by severe camera shake and moving objects. Additionally, it benefits from several advantages over existing strategies: (i) It is robust against outliers (noise, occlusions, saturation problems) and allows for sharp discontinuities in the displacement field. (ii) The alignment step neither requires camera calibration nor knowledge of the exposure times. (iii) It can be efficiently implemented on CPU and GPU architectures. After the alignment is performed, we use the obtained subpixel-accurate displacement fields as input for an energy‐based, joint super‐resolution and HDR (SR‐HDR) approach. It introduces robust data terms and anisotropic smoothness terms to the SR‐HDR literature. Our experiments with challenging real-world data demonstrate that these novelties are pivotal for the favourable performance of our approach.
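A minimal illustrative sketch (not the authors' energy-based method): given a dense subpixel displacement field from some alignment step, one exposure can be backward-warped toward the reference and the aligned exposures merged into a radiance estimate. Exposure times are assumed known here purely for the merge step, although the paper's alignment does not require them; all function names are hypothetical.

import numpy as np
from scipy.ndimage import map_coordinates

def warp_with_flow(img, flow_x, flow_y):
    """Backward-warp a grayscale image with a dense (subpixel) displacement field."""
    h, w = img.shape
    yy, xx = np.mgrid[0:h, 0:w]
    coords = np.stack([yy + flow_y, xx + flow_x])        # sample positions in the source
    return map_coordinates(img, coords, order=1, mode='nearest')

def merge_hdr(aligned, times, eps=1e-3):
    """Weighted average of aligned exposures in the radiance domain.
    'aligned' is a list of float images in [0,1]; 'times' their exposure times.
    A linear camera response is assumed for simplicity."""
    num = np.zeros_like(aligned[0])
    den = np.zeros_like(aligned[0])
    for img, t in zip(aligned, times):
        w = 1.0 - np.abs(2.0 * img - 1.0)                # triangle weight: trust mid-tones
        num += w * img / t                               # divide out the exposure time
        den += w
    return num / (den + eps)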

2.
Restoration of photographs damaged by camera shake is a challenging task that has attracted increasing attention in recent years. Despite the significant progress of blind deconvolution techniques, the ill-posed nature of the problem means that the finest details of the blur kernel cannot be recovered entirely. Moreover, the additional constraints and prior assumptions make these approaches relatively limited.
In this paper we introduce a novel technique that removes undesired blur artifacts from photographs taken with hand-held digital cameras. Our approach is based on the observation that, in general, several consecutive photographs taken by a user share image regions that depict the same scene content. Therefore, we take advantage of additional sharp photographs of the same scene. Based on several invariant local feature points filtered from the given blurred/non-blurred images, our approach matches the keypoints and estimates the blur kernel using additional statistical constraints.
We also present a simple deconvolution technique that preserves edges while minimizing ringing artifacts in the restored latent image. The experimental results show that our technique accurately infers the blur kernel while significantly reducing the artifacts of the degraded images.
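A minimal sketch of one building block, not the authors' pipeline: once a blurred and a sharp photograph of the same region have been registered (e.g. via matched keypoints), a first blur-kernel estimate can be obtained by regularized division in the Fourier domain; the paper adds statistical constraints on top of such an estimate. All names below are illustrative.

import numpy as np

def estimate_kernel(blurred, sharp, ksize=31, eps=1e-2):
    """Estimate a ksize x ksize blur kernel from an aligned blurred/sharp image pair."""
    B, S = np.fft.fft2(blurred), np.fft.fft2(sharp)
    K = np.real(np.fft.ifft2(B * np.conj(S) / (np.abs(S) ** 2 + eps)))
    K = np.fft.fftshift(K)                        # move the kernel centre to the image centre
    cy, cx = np.array(K.shape) // 2
    r = ksize // 2
    k = K[cy - r:cy + r + 1, cx - r:cx + r + 1]   # crop to the kernel support
    k = np.clip(k, 0, None)                       # blur kernels are non-negative
    return k / (k.sum() + 1e-12)                  # and sum to one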

3.
This paper investigates a new approach for color transfer. Rather than transferring color from one image to another globally, we propose a system with a stroke‐based user interface that provides a direct indication mechanism. We further present a multiple local color transfer method. Through our system the user can easily enhance a defective (source) photo by referring to other good-quality (target) images simply by drawing some strokes. The system then performs the multiple local color transfer automatically. The system consists of two major steps. First, the user draws strokes on the source and target images to indicate corresponding regions as well as the regions he or she wants to preserve. The regions to be preserved are masked out based on an improved graph-cut algorithm. Second, a multiple local color transfer method transfers the color from the target image(s) to the source image through gradient‐guided pixel‐wise color transfer functions. Finally, the defective (source) image can be enhanced seamlessly by multiple local color transfer based on good-quality (target) examples through an interactive and intuitive stroke‐based user interface.
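A minimal sketch of the classical global building block (Reinhard-style per-channel mean/std matching) that local, stroke-guided transfer methods such as this one refine; it is not the paper's gradient-guided pixel-wise transfer.

import numpy as np

def match_statistics(source, target):
    """Per-channel mean/std matching; source and target are float H x W x 3 arrays in [0,1]."""
    out = np.empty_like(source)
    for c in range(3):
        s, t = source[..., c], target[..., c]
        out[..., c] = (s - s.mean()) / (s.std() + 1e-6) * t.std() + t.mean()
    return np.clip(out, 0.0, 1.0)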

4.
One of the most common tasks in image and video editing is the local adjustment of various properties (e.g., saturation or brightness) of regions within an image or video. Edge‐aware interpolation of user‐drawn scribbles offers a less effort‐intensive approach to this problem than traditional region selection and matting. However, the technique suffers from a number of limitations, such as reduced performance in the presence of texture contrast and the inability to handle fragmented appearances. We significantly improve the performance of edge‐aware interpolation for this problem by adding a boosting‐based classification step that learns to discriminate between the appearances of scribbled pixels. We show that this novel data term, in combination with an existing edge‐aware optimization technique, achieves substantially better results for the local image and video adjustment problem than edge‐aware interpolation without classification, or related methods such as matting techniques or graph-cut segmentation.
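A minimal sketch of the classification step's role, assuming scikit-learn's GradientBoostingClassifier as a stand-in for the paper's boosting variant: a classifier is trained on the colours of scribbled pixels and its per-pixel probability serves as the data term for a subsequent edge-aware optimization (not shown).

import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

def scribble_data_term(image, fg_mask, bg_mask):
    """image: H x W x 3 float array; fg_mask/bg_mask: boolean scribble masks."""
    X = np.concatenate([image[fg_mask], image[bg_mask]])           # RGB features of scribbled pixels
    y = np.concatenate([np.ones(fg_mask.sum()), np.zeros(bg_mask.sum())])
    clf = GradientBoostingClassifier(n_estimators=100, max_depth=3).fit(X, y)
    prob = clf.predict_proba(image.reshape(-1, 3))[:, 1]           # P(pixel belongs to the 'foreground' scribble)
    return prob.reshape(image.shape[:2])                           # per-pixel data term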

5.
This paper proposes an algorithm that uses image registration to estimate a non‐uniform motion blur point spread function (PSF) caused by camera shake. Our study is based on a motion blur model which describes the blur effects of camera shake using a set of planar perspective projections (i.e., homographies). This representation can fully describe the 3D motion of camera shake that causes non‐uniform motion blur. We transform the non‐uniform PSF estimation problem into a set of image registration problems which estimate the homographies of the motion blur model one by one through the Lucas‐Kanade algorithm. We demonstrate the performance of our algorithm using both synthetic and real-world examples. We also discuss the effectiveness and limitations of our algorithm for non‐uniform deblurring.
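A minimal sketch of the underlying blur model rather than the estimation itself: a non-uniform camera-shake blur is modelled as the average of the latent image warped by a sequence of homographies, which the paper recovers one by one via registration. OpenCV's warpPerspective is used here purely for illustration.

import numpy as np
import cv2

def synthesize_nonuniform_blur(latent, homographies):
    """latent: H x W float image; homographies: list of 3x3 numpy arrays."""
    img = latent.astype(np.float32)
    h, w = img.shape[:2]
    acc = np.zeros_like(img)
    for H in homographies:
        acc += cv2.warpPerspective(img, H.astype(np.float64), (w, h))  # one pose of the shaking camera
    return acc / len(homographies)                                     # average over the exposure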

6.
Annoying shaky motion is one of the most significant problems in home videos, since hand shake is unavoidable when capturing with a hand‐held camcorder. Video stabilization is an important technique for solving this problem, but the stabilized videos produced by some current methods usually have reduced resolution and are still not sufficiently stable. In this paper, we propose a robust and practical method for full‐frame video stabilization that takes the user's capturing intention into account in order to remove not only high-frequency shaky motions but also low-frequency unexpected movements. To infer the user's capturing intention, we first consider the regions of interest in the video to estimate which regions or objects the user wants to capture, and then use a polyline to estimate a new, stable camcorder motion path while avoiding cutting out the regions or objects of interest. We then fill the dynamic and static missing areas caused by frame alignment using content from other frames, keeping the same resolution and quality as the original video. Furthermore, we smooth the discontinuous regions using a three‐dimensional Poisson‐based method. After these automatic operations, a full‐frame stabilized video is obtained in which the important regions and objects are preserved.
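A minimal sketch of one ingredient only (not the intention-aware path estimation): accumulate per-frame motion estimates into a camera path, low-pass filter it, and apply the difference as a per-frame stabilizing correction. The simple moving-average filter and all names are illustrative.

import numpy as np

def smooth_path(per_frame_shifts, window=15):
    """per_frame_shifts: N x 2 array of frame-to-frame (dx, dy) estimates."""
    path = np.cumsum(per_frame_shifts, axis=0)                 # accumulated camera path
    kernel = np.ones(window) / window                          # moving-average low-pass filter
    smoothed = np.stack([np.convolve(path[:, i], kernel, mode='same')
                         for i in range(2)], axis=1)
    return smoothed - path                                     # per-frame correction to warp each frame by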

7.

8.
Image blur caused by object motion attenuates the high-frequency content of images, making post‐capture deblurring an ill‐posed problem. The recoverable frequency band quickly becomes narrower for faster object motion, as high frequencies are severely attenuated and virtually lost. This paper proposes to translate the camera sensor circularly about the optical axis during exposure, so that high frequencies are preserved for a wide range of in‐plane linear object motion in any direction, up to some predetermined speed. That is, although no object may be photographed sharply at capture time, differently moving objects captured in a single image can be deconvolved with similar quality. In addition, circular sensor motion is shown to facilitate blur estimation thanks to the distinct frequency zero patterns of the resulting motion blur point‐spread functions. An analysis of the frequency characteristics of circular sensor motion in relation to linear object motion is presented, along with deconvolution results for photographs captured with a prototype camera.
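A minimal sketch of the blur model: the PSF traced out when the sensor translates on a circle while an object moves linearly during exposure, together with its frequency spectrum, whose behaviour (presence or near-absence of zeros) is what the paper analyses. All parameter values are arbitrary illustrative choices.

import numpy as np

def circular_motion_psf(size=65, radius=6.0, obj_speed=(8.0, 0.0), steps=500):
    """Accumulate the relative motion path into a discrete PSF and return its spectrum."""
    psf = np.zeros((size, size))
    c = size // 2
    for k in range(steps):
        t = k / (steps - 1.0)                            # normalized time within the exposure
        sx = radius * np.cos(2 * np.pi * t)               # circular sensor translation
        sy = radius * np.sin(2 * np.pi * t)
        ox, oy = obj_speed[0] * t, obj_speed[1] * t        # linear object motion
        x, y = int(round(c + ox - sx)), int(round(c + oy - sy))
        if 0 <= x < size and 0 <= y < size:
            psf[y, x] += 1.0
    psf /= psf.sum()
    spectrum = np.abs(np.fft.fftshift(np.fft.fft2(psf)))   # inspect for near-zero frequencies
    return psf, spectrum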

9.
Color quantization replaces the color of each pixel with the closest representative color, thereby partitioning the resulting image into uniformly-colored regions. As a consequence, the continuous, detailed variations of color over the corresponding regions in the original image are lost through color quantization. In this paper, we present a novel blind scheme for restoring such variations from a color-quantized input image without a priori knowledge of the quantization method. Our scheme identifies which pairs of uniformly-colored regions in the input image should have continuous variations of color in the resulting image. Such regions are then seamlessly stitched through optimization while preserving the closest representative colors. The user can optionally indicate which regions should be separated or stitched by scribbling constraint brushes across the regions. We demonstrate the effectiveness of our approach through diverse examples, such as photographs, cartoons, and artistic illustrations.

10.
High‐quality video editing usually requires accurate layer separation in order to resolve occlusions. However, most existing bilayer segmentation algorithms require either considerable user intervention or a simple stationary camera configuration with a known background, conditions that are difficult to meet in many real-world online applications. This paper demonstrates that various visually appealing montage effects can be created online from live video captured by a rotating camera, by accurately retrieving the camera state and segmenting out the dynamic foreground. The key contribution is a novel fast bilayer segmentation method that can effectively extract the dynamic foreground under a rotational camera configuration and is robust to imperfect background estimation and complex background colors. Our system can create a variety of live visual effects, including but not limited to realistic virtual object insertion, background substitution and blurring, non‐photorealistic rendering and camouflage effects. A variety of challenging examples demonstrate the effectiveness of our method.

11.
Diorama artists produce a spectacular 3D effect in a confined space by generating depth illusions that are faithful to the ordering of the objects in a large real or imaginary scene. Indeed, cognitive scientists have discovered that depth perception is mostly affected by depth order and precedence among objects. Motivated by these findings, we employ ordinal cues to construct a model from a single image that, similar to dioramas, intensifies the depth perception. We demonstrate that such models are sufficient for the creation of realistic 3D visual experiences. The initial step of our technique extracts several relative depth cues that are well known to exist in the human visual system. Next, we integrate the resulting cues to create a coherent surface. We introduce wide slits in the surface, thus generalizing the concept of cardboard cutout layers. Lastly, the surface geometry and texture are extended alongside the slits to allow small changes in viewpoint, which enriches the depth illusion.

12.
This paper presents methods for photo‐realistic rendering using strongly spatially variant illumination captured from real scenes. The illumination is captured along arbitrary paths in space using a high dynamic range (HDR) video camera system with position tracking. Light samples are rearranged into 4D incident light fields (ILFs) suitable for direct use as illumination in renderings. Analysis of the captured data allows for estimation of the shape, position and spatial and angular properties of light sources in the scene. The estimated light sources can be extracted from the large 4D data set and handled separately to render scenes more efficiently and with higher quality. The ILF lighting can also be edited for detailed artistic control.

13.
The field of computational photography, and in particular the design and implementation of coded apertures, has yielded impressive results in recent years. In this paper we introduce perceptually optimized coded apertures for defocus deblurring. We obtain near‐optimal apertures by means of optimization, with a novel evaluation function that includes two existing perceptual image quality metrics. These metrics favour results in which errors in the final deblurred images will not be perceived by a human observer. Our work improves upon the results obtained with a similar approach that only takes the L2 metric into account in the evaluation function.
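A minimal sketch of such an evaluation loop under simplifying assumptions (Wiener deconvolution and a single SSIM score from scikit-image standing in for the paper's pair of perceptual metrics): blur a test image with a candidate aperture PSF, add noise, deconvolve, and score the restored result.

import numpy as np
from skimage.metrics import structural_similarity

def pad_psf(psf, shape):
    """Zero-pad a small PSF to the image shape and centre it at the origin for the FFT."""
    out = np.zeros(shape)
    h, w = psf.shape
    out[:h, :w] = psf / psf.sum()
    return np.roll(out, (-(h // 2), -(w // 2)), axis=(0, 1))

def score_aperture(image, psf, noise_sigma=0.01):
    """Simulate capture with the candidate aperture, restore, and score perceptually."""
    H = np.fft.fft2(pad_psf(psf, image.shape))
    blurred = np.real(np.fft.ifft2(np.fft.fft2(image) * H))
    blurred += np.random.normal(0.0, noise_sigma, image.shape)          # sensor noise
    wiener = np.conj(H) / (np.abs(H) ** 2 + noise_sigma ** 2)           # Wiener deconvolution filter
    restored = np.real(np.fft.ifft2(np.fft.fft2(blurred) * wiener))
    return structural_similarity(image, restored, data_range=1.0)       # stand-in perceptual score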

14.
Automatic Conversion of Mesh Animations into Skeleton-based Animations   (Total citations: 1; self-citations: 0; cited by others: 1)
Recently, it has become increasingly popular to represent animations not by means of a classical skeleton‐based model, but in the form of deforming mesh sequences. The reason for this new trend is that novel mesh deformation methods as well as new surface-based scene capture techniques offer a great level of flexibility during animation creation. Unfortunately, the resulting scene representation is less compact than skeletal ones, and there is not yet a rich toolbox available that enables easy post‐processing and modification of mesh animations. To bridge this gap between the mesh‐based and the skeletal paradigm, we propose a new method that automatically extracts a plausible kinematic skeleton, skeletal motion parameters, as well as surface skinning weights from arbitrary mesh animations. By this means, deforming mesh sequences can be fully automatically transformed into fully rigged virtual subjects. The original input can then be quickly rendered based on the new compact bone and skin representation, and it can be easily modified using the full repertoire of already existing animation tools.

15.
Learning regressors from low‐resolution patches to high‐resolution patches has shown promising results for image super‐resolution. We observe that some regressors are better at dealing with certain cases, and others with different cases. In this paper, we jointly learn a collection of regressors which collectively yield the smallest super‐resolution error for all training data. After training, each training sample is associated with a label indicating its ‘best’ regressor, the one yielding the smallest error. During testing, our method relies on the concept of ‘adaptive selection’ to choose the most appropriate regressor for each input patch. We assume that similar patches can be super‐resolved by the same regressor and use a fast, approximate kNN approach to transfer the labels of training patches to test patches. The method is conceptually simple and computationally efficient, yet very effective. Experiments on four datasets show that our method outperforms competing methods.
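A minimal sketch of the selection idea, assuming scikit-learn and a nearest-centroid label transfer in place of the paper's joint learning and approximate kNN: cluster low-resolution patches, fit one ridge regressor per cluster, and pick a regressor for each test patch by its nearest cluster centre.

import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import Ridge

def train(lr_patches, hr_patches, n_regressors=32):
    """lr_patches: N x d_lr array; hr_patches: N x d_hr array of corresponding targets."""
    km = KMeans(n_clusters=n_regressors, n_init=10).fit(lr_patches)
    regs = [Ridge(alpha=1e-2).fit(lr_patches[km.labels_ == k],
                                  hr_patches[km.labels_ == k])
            for k in range(n_regressors)]                 # one regressor per cluster
    return km, regs

def super_resolve(lr_patches, km, regs):
    labels = km.predict(lr_patches)                        # choose a regressor per test patch
    out = np.zeros((len(lr_patches), regs[0].coef_.shape[0]))
    for k, reg in enumerate(regs):
        idx = labels == k
        if idx.any():
            out[idx] = reg.predict(lr_patches[idx])
    return out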

16.
This article focuses on real‐time image correction techniques that enable projector‐camera systems to display images on screens that are not optimized for projection, such as geometrically complex, coloured and textured surfaces. It reviews hardware‐accelerated methods such as pixel‐precise geometric warping, radiometric compensation, multi‐focal projection and the correction of general light modulation effects. Online and offline calibration as well as invisible coding methods are explained. Novel attempts in super‐resolution, high‐dynamic‐range and high‐speed projection are discussed. These techniques open up a variety of new applications for projection displays, some of which are also presented in this report.
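A minimal sketch of per-pixel radiometric compensation, one of the reviewed techniques: assuming geometric projector-camera registration is already solved and a per-pixel surface reflectance and ambient term have been calibrated, the projected image is pre-distorted so that the surface reflects (approximately) the desired image. Names and the simple linear response model are illustrative.

import numpy as np

def compensate(desired, reflectance, ambient):
    """All inputs are float H x W x 3 arrays in [0,1], in projector pixel coordinates."""
    comp = (desired - ambient) / np.clip(reflectance, 1e-3, None)   # invert the surface response
    return np.clip(comp, 0.0, 1.0)        # clamp: some target values are physically unreachable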

17.
Making unrealistic faces look realistic is a complicated task that requires a rich imagination and an understanding of facial structure. When face matching, warping or stitching techniques are applied, existing methods are generally incapable of capturing detailed personal characteristics, are disturbed by block-boundary artefacts, or require painting‐photo pairs for training. This paper presents a data‐driven framework to enhance the realism of sketches and portrait paintings based only on photo samples. It retrieves optimal patches of adaptable shapes and numbers according to the content of the input portrait and the collected photos. These patches are then seamlessly stitched together by chromatic gain and offset compensation and multi‐level blending. Experiments and user evaluations show that the proposed method is able to generate realistic and novel results from a moderately sized photo collection.

18.
Deformation is a topic of interest in many disciplines. In particular, in medical research, deformations of surfaces and even entire volumetric structures are of interest. Clear visualization of such deformations can lead to important insight into growth processes and progression of disease.
We present new techniques for direct focus+context visualization of deformation fields representing transformations between pairs of volumetric datasets. Typically, such fields are computed by performing a non-rigid registration between two data volumes. Our visualization is based on direct volume rendering and uses the GPU to compute and interactively visualize features of these deformation fields in real time. We integrate visualization of the deformation field with visualization of the scalar volume affected by the deformations. Furthermore, we present a novel use of texturing in volume-rendered visualizations to show additional properties of the vector field on surfaces in the volume.

19.
Interactive computation of global illumination is a major challenge in current computer graphics research. Global illumination heavily affects the visual quality of generated images and is therefore a key attribute for the perception of photo‐realistic images. Path tracing is able to simulate the physical behaviour of light using Monte Carlo techniques. However, the computational burden of this technique prohibits high-quality interactive rendering on standard commodity hardware. Trying to solve the Monte Carlo integration with fewer samples results in characteristically noisy images. Global illumination filtering methods take advantage of the fact that the integrals for neighbouring pixels may be very similar. Averaging samples of similar characteristics in screen space may approximate the correct integral, but may result in visible outliers. In this paper, we present a novel path tracing pipeline based on an edge‐aware filtering method for the indirect illumination which produces visually more pleasing results without noticeable outliers. The key idea is not to filter the noisy path-traced image itself, but to use it as guidance for filtering a second image composed of characteristic scene attributes that do not contain noise by default. We show that our approach better approximates the Monte Carlo integral compared to previous methods. Since the computation is carried out entirely in screen space, it is applicable to fully dynamic scenes and arbitrary lighting, and allows for high‐quality path tracing at interactive frame rates on commodity hardware.
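A minimal sketch of the guidance idea using a plain cross-bilateral filter (the paper's filter and its attribute image are more elaborate): the noisy indirect illumination is smoothed with weights computed from a noise-free guidance image of scene attributes, so that edges present in the guidance are preserved.

import numpy as np

def cross_bilateral(noisy, guide, radius=4, sigma_s=3.0, sigma_r=0.1):
    """noisy: H x W noisy radiance; guide: H x W noise-free feature image (e.g. albedo/normal term)."""
    h, w = noisy.shape
    pad_n = np.pad(noisy, radius, mode='edge')
    pad_g = np.pad(guide, radius, mode='edge')
    num = np.zeros_like(noisy)
    den = np.zeros_like(noisy)
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            sn = pad_n[radius + dy:radius + dy + h, radius + dx:radius + dx + w]
            sg = pad_g[radius + dy:radius + dy + h, radius + dx:radius + dx + w]
            w_s = np.exp(-(dx * dx + dy * dy) / (2 * sigma_s ** 2))          # spatial weight
            w_r = np.exp(-((sg - guide) ** 2) / (2 * sigma_r ** 2))          # guidance (range) weight
            num += w_s * w_r * sn
            den += w_s * w_r
    return num / (den + 1e-12)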

20.
Although several new tone‐mapping operators are proposed each year, there is no reliable method to validate their performance or to tell how different they are from one another. In order to analyze and understand the behavior of tone‐mapping operators, we model their mechanisms by fitting a generic operator to an HDR image and its tone‐mapped LDR rendering. We demonstrate that the majority of both global and local tone‐mapping operators can be well approximated by computationally inexpensive image processing operations, such as a per‐pixel tone curve, a modulation transfer function and a color saturation adjustment. The results produced by such a generic tone‐mapping algorithm are often visually indistinguishable from those of much more expensive algorithms, such as the bilateral filter. We show the usefulness of our generic tone‐mapper in backward‐compatible HDR image compression, the black‐box analysis of existing tone-mapping algorithms, and the synthesis of new algorithms that are combinations of existing operators.
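A minimal sketch of two of the generic operator's ingredients, a global tone curve applied to luminance and a colour saturation adjustment, with hand-picked parameters; the paper additionally models a modulation transfer function and fits all parameters to a given HDR/LDR image pair.

import numpy as np

def tone_map(hdr, curve_gamma=0.45, saturation=0.6, key=0.18):
    """hdr: H x W x 3 linear radiance image. Returns an LDR image in [0,1]."""
    lum = 0.2126 * hdr[..., 0] + 0.7152 * hdr[..., 1] + 0.0722 * hdr[..., 2]
    lum_avg = np.exp(np.mean(np.log(lum + 1e-6)))            # log-average luminance
    scaled = key * lum / lum_avg                              # exposure normalization
    lum_out = np.clip(scaled, 0, None) ** curve_gamma         # simple global tone curve
    lum_out = lum_out / (1.0 + lum_out)                       # compress highlights
    ratio = (hdr / (lum[..., None] + 1e-6)) ** saturation     # colour saturation adjustment
    return np.clip(lum_out[..., None] * ratio, 0.0, 1.0)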
