Similar Articles
 Found 20 similar articles (search time: 0 ms)
1.
    
Color quantization replaces the color of each pixel with the closest representative color, partitioning the resulting image into uniformly-colored regions. As a consequence, continuous, detailed variations of color over the corresponding regions in the original image are lost through color quantization. In this paper, we present a novel blind scheme for restoring such variations from a color-quantized input image without a priori knowledge of the quantization method. Our scheme identifies which pairs of uniformly-colored regions in the input image should have continuous variations of color in the resulting image. Such regions are then seamlessly stitched through optimization while preserving the closest representative colors. The user can optionally indicate which regions should be separated or stitched by scribbling constraint brushes across the regions. We demonstrate the effectiveness of our approach through diverse examples, such as photographs, cartoons, and artistic illustrations.
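The quantization step that this paper inverts can be sketched as a nearest-representative-color lookup. This is a minimal illustration, not the authors' code; the function name, the toy image, and the two-color palette are all hypothetical:

```python
import numpy as np

def quantize(image, palette):
    """Replace each pixel with its nearest palette color (Euclidean in RGB).

    image:   (H, W, 3) float array
    palette: (K, 3) float array of representative colors
    """
    # Distances from every pixel to every palette entry: shape (H, W, K).
    d = np.linalg.norm(image[..., None, :] - palette[None, None, :, :], axis=-1)
    idx = np.argmin(d, axis=-1)   # nearest palette index per pixel
    return palette[idx]           # yields uniformly-colored regions

# A 2x2 toy image and a black/white palette (hypothetical data).
img = np.array([[[0.1, 0.1, 0.1], [0.9, 0.8, 0.9]],
                [[0.2, 0.0, 0.1], [1.0, 1.0, 0.9]]])
pal = np.array([[0.0, 0.0, 0.0], [1.0, 1.0, 1.0]])
q = quantize(img, pal)
```

After this mapping, the smooth gradients the paper seeks to restore are gone: every pixel carries one of the K representative colors.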

2.
    
Restoration of photographs damaged by camera shake is a challenging task that has attracted increasing attention in recent years. Despite the important progress of blind deconvolution techniques, the ill-posed nature of the problem means that the finest details of the blur kernel cannot be recovered entirely. Moreover, the additional constraints and prior assumptions make these approaches relatively limited.
In this paper we introduce a novel technique that removes undesired blur artifacts from photographs taken with hand-held digital cameras. Our approach is based on the observation that, in general, several consecutive photographs taken by a user share image regions that project the same scene content. We therefore take advantage of additional sharp photographs of the same scene. Based on several invariant local feature points filtered from the given blurred/non-blurred images, our approach matches the keypoints and estimates the blur kernel using additional statistical constraints.
We also present a simple deconvolution technique that preserves edges while minimizing ringing artifacts in the restored latent image. The experimental results show that our technique is able to infer the blur kernel accurately while significantly reducing the artifacts of the degraded images.
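When a sharp and a blurred view of the same content are available, a blur kernel can be sketched by regularized division in the frequency domain. This Wiener-style estimate stands in for the paper's statistically constrained estimation; the function name and the synthetic two-tap kernel are illustrative assumptions:

```python
import numpy as np

def estimate_kernel(blurred, sharp, eps=1e-3):
    """Estimate k with blurred ≈ sharp * k (circular convolution),
    via regularized division in the frequency domain."""
    B, S = np.fft.fft2(blurred), np.fft.fft2(sharp)
    K = (B * np.conj(S)) / (np.abs(S) ** 2 + eps)   # Wiener-style division
    k = np.real(np.fft.ifft2(K))
    k = np.maximum(k, 0.0)    # blur kernels are non-negative
    return k / k.sum()        # and sum to one

rng = np.random.default_rng(0)
sharp = rng.random((32, 32))
true_k = np.zeros((32, 32))
true_k[0, 0] = 0.5; true_k[0, 1] = 0.5   # synthetic 2-tap horizontal blur
blurred = np.real(np.fft.ifft2(np.fft.fft2(sharp) * np.fft.fft2(true_k)))
k_est = estimate_kernel(blurred, sharp)
```

With noise-free synthetic data the two taps are recovered almost exactly; real blurred/sharp pairs need the stronger priors and constraints the paper describes.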

3.
    
This article focuses on real‐time image correction techniques that enable projector‐camera systems to display images onto screens that are not optimized for projections, such as geometrically complex, coloured and textured surfaces. It reviews hardware‐accelerated methods like pixel‐precise geometric warping, radiometric compensation, multi‐focal projection and the correction of general light modulation effects. Online and offline calibration as well as invisible coding methods are explained. Novel attempts in super‐resolution, high‐dynamic range and high‐speed projection are discussed. These techniques open a variety of new applications for projection displays. Some of them will also be presented in this report.
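Radiometric compensation, one of the reviewed techniques, can be sketched under a simple linear reflection model: what the camera observes is the projected value times the per-pixel surface reflectance, plus ambient light. The model, names, and toy values below are illustrative assumptions, not the surveyed methods themselves:

```python
import numpy as np

def compensate(desired, surface_reflectance, ambient):
    """Per-pixel projector input so the observed image matches `desired`
    on a coloured/textured screen, assuming observed = proj * reflectance
    + ambient. Values outside the projector's [0, 1] range are clamped."""
    proj = (desired - ambient) / np.maximum(surface_reflectance, 1e-6)
    return np.clip(proj, 0.0, 1.0)

desired = np.full((4, 4, 3), 0.5)
reflect = np.full((4, 4, 3), 0.8)   # a slightly dark, uniform surface
ambient = np.full((4, 4, 3), 0.1)
proj = compensate(desired, reflect, ambient)
observed = proj * reflect + ambient   # what a viewer would see
```

Where the required projector intensity exceeds the clamp, compensation fails, which is why the report also covers multi-projector and perceptual extensions.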

4.
We present an alternative approach to creating digital camouflage images that follows human perceptual intuition and complies with the physical creation procedure of artists. Our method is based on a two‐scale decomposition of the input images. We modify the large‐scale layer of the background image by considering structural importance based on energy optimization, and the detail layer by controlling its spatial variation. A gradient correction is presented to prevent halo artifacts. Users can control the difficulty of perceiving the camouflage effect through a few parameters. Our camouflage images are natural and have fewer long coherent edges in the hidden region. Experimental results show that our algorithm yields visually pleasing camouflage images.
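The two-scale decomposition underlying this method can be sketched as splitting an image into a smoothed large-scale layer and a residual detail layer. The box filter below is a stand-in for the edge-preserving filter a real implementation would use; all names and data are illustrative:

```python
import numpy as np

def box_blur(img, radius):
    """Separable box filter with edge replication -- a stand-in for the
    edge-preserving smoothing a real two-scale pipeline would use."""
    k = 2 * radius + 1
    pad = np.pad(img, radius, mode='edge')
    out = np.zeros_like(img, dtype=float)
    for dy in range(k):
        for dx in range(k):
            out += pad[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (k * k)

img = np.outer(np.linspace(0.0, 1.0, 8), np.ones(8))   # toy gradient image
large_scale = box_blur(img, 1)    # structure layer (edited for camouflage)
detail = img - large_scale        # detail layer (spatial variation controlled)
reconstructed = large_scale + detail
```

The decomposition is exactly invertible, so the two layers can be manipulated independently and recombined without loss.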

5.
Creating variations of an image object is an important task, which usually requires manipulating the skeletal structure of the object. However, most existing methods (such as image deformation) only allow stretching the skeletal structure of an object: modifying skeletal topology remains a challenge. This paper presents a technique for synthesizing image objects with different skeletal structures while remaining faithful to an input image object. To apply this technique, a user first annotates the skeletal structure of the input object by specifying a number of strokes in the input image, and draws corresponding strokes in an output domain to generate new skeletal structures. Then, example texture pieces are sampled along the strokes in the input image and pasted along the strokes in the output domain with matching orientations. The result is obtained by optimizing the texture sampling and seam computation. The proposed method is successfully used to synthesize challenging skeletal structures, such as skeletal branches, and a wide range of image objects with various skeletal structures, demonstrating its effectiveness.

6.
    
Annoying shaky motion is one of the significant problems in home videos, since hand shake is unavoidable when capturing with a hand‐held camcorder. Video stabilization is an important technique for solving this problem, but the stabilized videos produced by some current methods usually have reduced resolution and are still not very stable. In this paper, we propose a robust and practical method for full‐frame video stabilization that considers the user's capture intention, removing not only high-frequency shaky motions but also low-frequency unexpected movements. To infer the user's capture intention, we first consider the regions of interest in the video to estimate which regions or objects the user wants to capture, and then use a polyline to estimate a new stable camcorder motion path while avoiding cutting out the regions or objects of interest. We then fill the dynamic and static missing areas caused by frame alignment from other frames to keep the same resolution and quality as the original video. Furthermore, we smooth the discontinuous regions using a three‐dimensional Poisson‐based method. After these automatic operations, a full‐frame stabilized video is achieved and the important regions and objects are preserved.
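The core of any stabilizer is low-pass filtering the estimated camera path. The centered moving average below is a simple stand-in for the paper's polyline fitting; the 1-D path, window size, and noise model are illustrative assumptions:

```python
import numpy as np

def smooth_path(path, window):
    """Low-pass filter a per-frame camera position sequence with a centered
    moving average -- a stand-in for the paper's polyline path estimation."""
    pad = window // 2
    padded = np.pad(path, (pad, pad), mode='edge')
    kernel = np.ones(window) / window
    return np.convolve(padded, kernel, mode='valid')

rng = np.random.default_rng(1)
intended = np.linspace(0.0, 10.0, 50)            # slow intentional pan
shaky = intended + rng.normal(0.0, 0.3, 50)      # hand-shake jitter
stable = smooth_path(shaky, window=9)
```

Frame-to-frame jitter in the smoothed path is much smaller than in the raw one; a full stabilizer would additionally constrain the path to keep the regions of interest in frame, as the paper describes.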

7.
    
Image completion techniques aim to complete selected regions of an image in a natural looking manner with little or no user interaction. Video Completion, the space–time equivalent of the image completion problem, inherits and extends both the difficulties and the solutions of the original 2D problem, but also imposes new ones—mainly temporal coherency and space complexity (videos contain significantly more information than images). Data‐driven approaches to completion have been established as a favoured choice, especially when large regions have to be filled. In this survey, we present the current state of the art in data‐driven video completion techniques. For unacquainted researchers, we aim to provide a broad yet easy to follow introduction to the subject (including an extensive review of the image completion foundations) and early guidance to the challenges ahead. For a versed reader, we offer a comprehensive review of the contemporary techniques, sectioned out by their approaches to key aspects of the problem.

8.
The ABSTRACT is to be in fully-justified italicized text, between two horizontal lines, in one-column format, below the author and affiliation information. Use the word “Abstract” as the title, in 9-point Times, boldface type, left-aligned to the text, initially capitalized. The abstract is to be in 9-point, single-spaced type. The abstract may be up to 3 inches (7.62 cm) long. Leave one blank line after the abstract, then add the subject categories according to the ACM Classification Index (see http://www.acm.org/class/1998/ ).

9.
Diorama artists produce a spectacular 3D effect in a confined space by generating depth illusions that are faithful to the ordering of the objects in a large real or imaginary scene. Indeed, cognitive scientists have discovered that depth perception is mostly affected by depth order and precedence among objects. Motivated by these findings, we employ ordinal cues to construct a model from a single image that, similarly to Dioramas, intensifies the depth perception. We demonstrate that such models are sufficient for the creation of realistic 3D visual experiences. The initial step of our technique extracts several relative depth cues that are well known to exist in the human visual system. Next, we integrate the resulting cues to create a coherent surface. We introduce wide slits in the surface, thus generalizing the concept of cardboard cutout layers. Lastly, the surface geometry and texture are extended along the slits, to allow small changes in the viewpoint which enrich the depth illusion.

10.
Image Appearance Exploration by Model-Based Navigation   (Cited by: 1; self-citations: 0; citations by others: 1)
Changing the appearance of an image can be a complex and non-intuitive task. Often, the target colors and look of an image are only known vaguely, and many trials are needed to reach the desired result. Moreover, the effect of a specific change on an image is difficult to envision, since one must take spatial image considerations into account along with the color constraints. The tools provided by today's image processing applications can become highly technical and non-intuitive, involving various gauges and knobs.
In this paper we introduce a method for changing image appearance by navigation, focusing on recoloring images. The user visually navigates a high-dimensional space of possible color manipulations of an image, and can either explore it for inspiration or refine choices by navigating into sub-regions of this space toward a specific goal. This navigation is enabled by modeling the chroma channels of an image's colors using a Gaussian Mixture Model (GMM). The Gaussians model both color and spatial image coordinates, and provide a high-dimensional parameterization space of a rich variety of color manipulations. The user's actions are translated into transformations of the parameters of the model, which recolor the image. This approach provides both inspiration and intuitive navigation in the complex space of image color manipulations.
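The key mechanism, recoloring by editing Gaussian parameters, can be sketched with fixed isotropic Gaussians in a 2-D chroma space: each pixel moves by the responsibility-weighted shift of the Gaussian means. The cluster positions, bandwidth, and function names below are illustrative assumptions, not the paper's model:

```python
import numpy as np

def responsibilities(pixels, means, sigma):
    """Soft assignment of pixels to isotropic, equal-weight Gaussians."""
    d2 = ((pixels[:, None, :] - means[None, :, :]) ** 2).sum(-1)
    w = np.exp(-d2 / (2.0 * sigma ** 2))
    return w / w.sum(axis=1, keepdims=True)

def recolor(pixels, means, new_means, sigma=0.1):
    """Move each pixel by the responsibility-weighted shift of the Gaussians:
    editing one Gaussian's mean smoothly recolors the pixels it explains."""
    r = responsibilities(pixels, means, sigma)
    return pixels + r @ (new_means - means)

# Two hypothetical chroma clusters, near (0.2, 0.2) and (0.8, 0.8).
pixels = np.array([[0.2, 0.2], [0.22, 0.18], [0.8, 0.8], [0.78, 0.82]])
means = np.array([[0.2, 0.2], [0.8, 0.8]])
new_means = np.array([[0.2, 0.6], [0.8, 0.8]])   # user drags cluster 0 only
out = recolor(pixels, means, new_means)
```

Pixels explained by the dragged Gaussian shift with it, while the other cluster's pixels stay put, which is what makes parameter-space navigation feel local and intuitive.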

11.
    
This paper presents methods for photo‐realistic rendering using strongly spatially variant illumination captured from real scenes. The illumination is captured along arbitrary paths in space using a high dynamic range (HDR) video camera system with position tracking. Light samples are rearranged into 4D incident light fields (ILF) suitable for direct use as illumination in renderings. Analysis of the captured data allows for estimation of the shape, position and spatial and angular properties of light sources in the scene. The estimated light sources can be extracted from the large 4D data set and handled separately to render scenes more efficiently and with higher quality. The ILF lighting can also be edited for detailed artistic control.

12.
    
The rendering of large data sets can result in cluttered displays and non‐interactive update rates, leading to time consuming analyses. A straightforward solution is to reduce the number of items, thereby producing an abstraction of the data set. For the visual analysis to remain accurate, the graphical representation of the abstraction must preserve the significant features present in the original data. This paper presents a screen space quality method, based on distance transforms, that measures the visual quality of a data abstraction. This screen space measure is shown to better capture significant visual structures in data, compared with data space measures. The presented method is implemented on the GPU, allowing interactive creation of high quality graphical representations of multivariate data sets containing tens of thousands of items.
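A distance-transform-based screen-space measure can be sketched by comparing the distance fields of the original rendering's item mask and the abstraction's. The brute-force transform and the toy scatterplot masks below are illustrative; a GPU implementation like the paper's would use jump flooding or similar:

```python
import numpy as np

def distance_field(mask):
    """Brute-force Euclidean distance transform of a binary mask
    (distance from each pixel to the nearest True pixel)."""
    ys, xs = np.nonzero(mask)
    pts = np.stack([ys, xs], axis=1)
    gy, gx = np.mgrid[0:mask.shape[0], 0:mask.shape[1]]
    grid = np.stack([gy, gx], axis=-1).reshape(-1, 2)
    d = np.linalg.norm(grid[:, None, :] - pts[None, :, :], axis=-1)
    return d.min(axis=1).reshape(mask.shape)

def abstraction_error(original, abstraction):
    """Screen-space quality: mean absolute difference of distance fields."""
    return np.abs(distance_field(original) - distance_field(abstraction)).mean()

full = np.zeros((16, 16), bool)
full[[2, 2, 13, 13], [2, 13, 2, 13]] = True        # four plotted items
good = full.copy(); good[13, 13] = False            # drops one corner item
bad = np.zeros((16, 16), bool); bad[2, 2] = True    # keeps only one item
e_same = abstraction_error(full, full)
e_good = abstraction_error(full, good)
e_bad = abstraction_error(full, bad)
```

An abstraction that keeps the visual structure scores low; one that collapses it scores high, which is exactly what a data-space count of removed items cannot distinguish.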

13.
Display devices, more than ever, are finding their way into consumer electronics as a result of recent trends toward more functionality and user interaction. Combined with new developments in display technology toward a higher reproducible luminance range, the mobility and variation in capability of display devices are constantly increasing. Consequently, in real-life usage it is now very likely that the display emission is distorted by spatially and temporally varying reflections, and that the observer's visual system is not adapted to the particular display being viewed at that moment. The actual perception of the display content cannot be fully understood by considering only steady-state illumination and adaptation conditions. We propose an objective method for display visibility analysis, formulating the problem as a full-reference image quality assessment in which the display emission under "ideal" conditions is used as the reference for real-life conditions. Our work includes a human visual system model that accounts for maladaptation and temporal recovery of sensitivity. As an example application, we integrate our method into a global illumination simulator and analyze the visibility of a car interior display under realistic lighting conditions.

14.
    
Despite their high popularity, common high dynamic range (HDR) methods are still limited in their practical applicability: they assume that the input images are perfectly aligned, which is often violated in practice. Our paper not only frees the user from this unrealistic limitation, but even turns the missing alignment into an advantage: by exploiting the multiple exposures, we can create a super‐resolution image. The alignment step is performed by a modern energy‐based optic flow approach that takes into account the varying exposure conditions. Moreover, it produces dense displacement fields with subpixel precision. As a consequence, our approach can handle arbitrarily complex motion patterns, caused by severe camera shake and moving objects. Additionally, it benefits from several advantages over existing strategies: (i) it is robust to outliers (noise, occlusions, saturation problems) and allows for sharp discontinuities in the displacement field; (ii) the alignment step requires neither camera calibration nor knowledge of the exposure times; (iii) it can be efficiently implemented on CPU and GPU architectures. After the alignment is performed, we use the obtained subpixel-accurate displacement fields as input for an energy‐based, joint super‐resolution and HDR (SR‐HDR) approach, which introduces robust data terms and anisotropic smoothness terms to the SR‐HDR literature. Our experiments with challenging real-world data demonstrate that these novelties are pivotal for the favourable performance of our approach.
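Once exposures are aligned, the classic HDR merge can be sketched as a weighted average of per-exposure radiance estimates, with a hat weight that discounts under- and over-exposed pixels. This standard merge (assuming a linear camera response) stands in for the paper's joint SR-HDR energy; all names and the toy radiances are illustrative:

```python
import numpy as np

def merge_hdr(exposures, times):
    """Merge aligned LDR exposures into an HDR radiance map.
    Each pixel's estimate I / t is averaged with a hat weight that
    trusts mid-range values; assumes a linear camera response."""
    num = np.zeros_like(exposures[0], dtype=float)
    den = np.zeros_like(exposures[0], dtype=float)
    for img, t in zip(exposures, times):
        w = 1.0 - np.abs(2.0 * img - 1.0)   # hat weight, zero at 0 and 1
        num += w * img / t
        den += w
    return num / np.maximum(den, 1e-8)

radiance = np.array([[0.1, 0.45, 2.0]])               # true scene radiance
times = [0.25, 1.0, 4.0]
shots = [np.clip(radiance * t, 0.0, 1.0) for t in times]  # simulated LDR shots
hdr = merge_hdr(shots, times)
```

Saturated pixels receive zero weight, so each radiance value is reconstructed from the exposures that measured it reliably.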

15.
    
The plenoptic function is a ray‐based model for light that includes the colour spectrum as well as spatial, temporal and directional variation. Although digital light sensors have greatly evolved in the last years, one fundamental limitation remains: all standard CCD and CMOS sensors integrate over the dimensions of the plenoptic function as they convert photons into electrons; in the process, all visual information is irreversibly lost, except for a two‐dimensional, spatially varying subset—the common photograph. In this state‐of‐the‐art report, we review approaches that optically encode the dimensions of the plenoptic function transcending those captured by traditional photography and reconstruct the recorded information computationally.

16.
We present a novel method to estimate an approximation of the reflectance characteristics of optically thick, homogeneous translucent materials using only a single photograph as input. First, we approximate the diffusion profile as a linear combination of piecewise constant functions, an approach that enables a linear system minimization and maximizes robustness in the presence of suboptimal input data inferred from the image. We then fit to a smoother monotonically decreasing model, ensuring continuity on its first derivative. We show the feasibility of our approach and validate it in controlled environments, comparing well against physical measurements from previous works. Next, we explore the performance of our method in uncontrolled scenarios, where neither lighting nor geometry are known. We show that these can be roughly approximated from the corresponding image by making two simple assumptions: that the object is lit by a distant light source and that it is globally convex, allowing us to capture the visual appearance of the photographed material. Compared with previous works, our technique offers an attractive balance between visual accuracy and ease of use, allowing its use in a wide range of scenarios including off‐the‐shelf, single images, thus extending the current repertoire of real‐world data acquisition techniques.
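The first step, fitting a piecewise constant approximation by linear least squares, can be sketched with an indicator-matrix system: each bin gets one unknown, and the solve reduces to per-bin means. The bin edges and the exponential stand-in for a diffusion profile are illustrative assumptions:

```python
import numpy as np

def fit_piecewise_constant(r, values, edges):
    """Least-squares fit of samples (r, values) by a function that is
    constant on each bin [edges[i], edges[i+1])."""
    A = np.zeros((len(r), len(edges) - 1))
    idx = np.clip(np.searchsorted(edges, r, side='right') - 1,
                  0, len(edges) - 2)
    A[np.arange(len(r)), idx] = 1.0           # one indicator column per bin
    coeffs, *_ = np.linalg.lstsq(A, values, rcond=None)
    return coeffs

r = np.linspace(0.0, 1.0, 200)                # radial distance samples
profile = np.exp(-4.0 * r)                    # stand-in diffusion profile
edges = np.array([0.0, 0.25, 0.5, 0.75, 1.0])
c = fit_piecewise_constant(r, profile, edges)
```

The recovered coefficients decrease monotonically for a decaying profile; the paper's second stage then fits a smooth monotone model with continuous first derivative through these steps.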

17.
This paper proposes a novel system that “rephotographs” a historical photograph using a collection of images. Rather than finding the exact viewpoint of the historical photo, users only need to take a number of photographs around the target scene. We adopt the structure-from-motion technique to estimate the spatial relationships among these photographs and construct a 3D point cloud. Based on user‐specified correspondences between the projected 3D point cloud and the historical photograph, the camera parameters of the historical photograph are estimated. We then combine forward- and backward-warped images to render the result. Finally, inpainting and content‐preserving warping are used to refine it, and a photograph from the same viewpoint as the historical one is produced from the photo collection.

18.
    
If spatial augmented reality is used in the design process of a car, one of the most important issues is that the virtual content be projected with very high visual quality onto the real object, because design decisions are made based on this projection. In particular, the visualised colours on the real object should not be distinguishable from corresponding real reference colours. In this paper, we introduce a new approach for the augmentation of real objects that meets the requirements of a design process. We present a new rendering method based on ray tracing which increases the visual quality of the projection images in comparison to existing methods. The desired values of these images must further be adjusted according to the material, the ambient light and the local orientation of the projector. For this purpose, we develop a physically based computation which exactly determines the corresponding projection intensities for these values by using three‐dimensional lookup tables at every projector pixel. Since not all of the desired values can be represented with an intensity of the projector, an adjustment has to be computed for these values. Therefore, we conducted a user study with design experts who work in the automotive industry and use its results to propose a new adjustment method for such values. Finally, we compare our methods to existing procedures and conclude which ones are suitable for the design process of a car.

19.
(Cited by: 3; self-citations: 0; citations by others: 3)
In this paper we present LazyBrush, a novel interactive tool for painting hand-made cartoon drawings and animations. Its key advantages are simplicity and flexibility. As opposed to previous custom-tailored approaches [SBv05, QWH06], LazyBrush does not rely on style-specific features such as homogeneous regions or pattern continuity, yet still offers comparable or even less manual effort for a broad class of drawing styles. In addition, it is not sensitive to imprecise placement of color strokes, which makes painting less tedious and brings significant time savings in the context of cartoon animation. LazyBrush originally stems from a requirements analysis carried out with professional ink-and-paint illustrators, who established a list of useful features for an ideal painting tool. We incorporate this list into an optimization framework, leading to a variant of the Potts energy with several interesting theoretical properties. We show how to minimize it efficiently and demonstrate its usefulness in various practical scenarios, including the ink-and-paint production pipeline.
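A Potts-style labeling energy can be sketched as per-pixel data costs plus a penalty for unequal neighbors, with a simple greedy minimizer. Iterated conditional modes (ICM) below is only a local-minimum stand-in for the efficient solver the paper develops; the random unary costs are toy data:

```python
import numpy as np

def potts_energy(labels, unary, lam):
    """E = sum_p unary[p, labels[p]] + lam * (# of unequal neighbor pairs)."""
    h, w = labels.shape
    e = unary[np.arange(h)[:, None], np.arange(w)[None, :], labels].sum()
    e += lam * (labels[:, 1:] != labels[:, :-1]).sum()
    e += lam * (labels[1:, :] != labels[:-1, :]).sum()
    return float(e)

def icm(unary, lam, sweeps=5):
    """Iterated conditional modes: greedy per-pixel relabeling.
    Energy never increases, but only a local minimum is reached."""
    labels = unary.argmin(axis=2)
    h, w, k = unary.shape
    for _ in range(sweeps):
        for y in range(h):
            for x in range(w):
                best, best_e = labels[y, x], np.inf
                for l in range(k):
                    e = unary[y, x, l]
                    for ny, nx in ((y-1, x), (y+1, x), (y, x-1), (y, x+1)):
                        if 0 <= ny < h and 0 <= nx < w:
                            e += lam * (labels[ny, nx] != l)
                    if e < best_e:
                        best, best_e = l, e
                labels[y, x] = best
    return labels

rng = np.random.default_rng(2)
unary = rng.random((6, 6, 2))
unary[:, :3, 0] -= 1.0     # left half weakly prefers label 0 (toy "strokes")
unary[:, 3:, 1] -= 1.0     # right half weakly prefers label 1
labels = icm(unary, lam=0.5)
e_init = potts_energy(unary.argmin(axis=2), unary, 0.5)
e_icm = potts_energy(labels, unary, 0.5)
```

The smoothness term is what lets imprecisely placed strokes still produce clean, coherent paint regions.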

20.
Automatic Conversion of Mesh Animations into Skeleton-based Animations   (Cited by: 1; self-citations: 0; citations by others: 1)
Recently, it has become increasingly popular to represent animations not by means of a classical skeleton‐based model, but in the form of deforming mesh sequences. The reason for this new trend is that novel mesh deformation methods as well as new surface-based scene capture techniques offer a great level of flexibility during animation creation. Unfortunately, the resulting scene representation is less compact than skeletal ones, and there is not yet a rich toolbox available which enables easy post‐processing and modification of mesh animations. To bridge this gap between the mesh‐based and the skeletal paradigm, we propose a new method that automatically extracts a plausible kinematic skeleton, skeletal motion parameters, as well as surface skinning weights from arbitrary mesh animations. By this means, deforming mesh sequences can be fully automatically transformed into fully-rigged virtual subjects. The original input can then be quickly rendered based on the new compact bone and skin representation, and it can be easily modified using the full repertoire of already existing animation tools.


Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号