Similar Literature
20 similar documents found.
1.
Many visualization techniques use images containing meaningful color sequences. If such images are converted to grayscale, the sequence is often distorted, compromising the information in the image. We preserve the significance of a color sequence during decolorization by mapping the colors from a source image to a grid in the CIELAB color space. We then identify the most significant hues, and thin the corresponding cells of the grid to approximate a curve in the color space, eliminating outliers using a weighted Laplacian eigenmap. This curve is then mapped to a monotonic sequence of gray levels. The saturation values of the resulting image are combined with the original intensity channels to restore details such as text. Our approach can also be used to recolor images containing color sequences, for instance for viewers with color-deficient vision, or to interpolate between two images that use the same geometry and color sequence to present different data.
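
A much-simplified sketch of this idea follows: colors are mapped to CIELAB, ordered along the principal axis of the chroma plane (a crude stand-in for the paper's grid thinning and weighted Laplacian eigenmap), and the ordering is mapped to a monotonic gray ramp. The use of scikit-image and the rank-based gray assignment are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch: map a color-sequence image to grayscale by ordering its
# colors along a 1D direction in CIELAB and assigning monotonic gray levels.
# The principal-axis projection stands in for the paper's grid thinning and
# weighted Laplacian eigenmap step.
import numpy as np
from skimage import color

def sequence_preserving_decolorize(rgb):
    """rgb: float image in [0, 1], shape (H, W, 3). Returns gray in [0, 1]."""
    lab = color.rgb2lab(rgb).reshape(-1, 3)       # L*, a*, b* per pixel
    ab = lab[:, 1:] - lab[:, 1:].mean(axis=0)     # center the chroma plane
    # Principal axis of the chroma distribution approximates the color curve.
    _, _, vt = np.linalg.svd(ab, full_matrices=False)
    t = ab @ vt[0]                                # position along the curve
    # Monotonic mapping: rank positions along the curve onto a gray ramp.
    order = np.argsort(t)
    gray = np.empty(t.shape[0])
    gray[order] = np.linspace(0.0, 1.0, t.shape[0])
    return gray.reshape(rgb.shape[:2])
```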

2.
This paper proposes a new approach for color transfer between two images. Our method is unique in its consideration of the scene illumination and the constraint that the mapped image must be within the color gamut of the target image. Specifically, our approach first performs a white-balance step on both images to remove color casts caused by different illuminations in the source and target image. We then align each image to share the same ‘white axis’ and perform a gradient-preserving histogram matching technique along this axis to match the tone distribution between the two images. We show that this illuminant-aware strategy gives better results than directly working with the luminance channels of the original source and target images, as done by many previous methods. Afterwards, our method performs a full gamut-based mapping technique rather than processing each channel separately. This guarantees that the colors of our transferred image lie within the target gamut. Our experimental results show that this combined illuminant-aware and gamut-based strategy produces more compelling results than previous methods. We detail our approach and demonstrate its effectiveness on a number of examples.
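
Two of the ingredients named above can be sketched in a few lines; the gray-world white balance and the plain (non-gradient-preserving) histogram matching below are simplifying assumptions, and the paper's white-axis alignment and gamut-based mapping are omitted.

```python
# Minimal sketch: gray-world white balance plus histogram matching of a crude
# luminance channel. Both are stand-ins for the paper's white-balance step and
# its gradient-preserving, gamut-constrained matching.
import numpy as np

def gray_world_white_balance(img):
    """img: float RGB in [0, 1]. Scales channels so their means are equal."""
    means = img.reshape(-1, 3).mean(axis=0)
    return np.clip(img * (means.mean() / means), 0.0, 1.0)

def histogram_match(source, target):
    """Remap 'source' values so their distribution matches 'target'."""
    s_vals, s_idx, s_counts = np.unique(source.ravel(),
                                        return_inverse=True, return_counts=True)
    t_vals, t_counts = np.unique(target.ravel(), return_counts=True)
    s_cdf = np.cumsum(s_counts) / source.size
    t_cdf = np.cumsum(t_counts) / target.size
    matched = np.interp(s_cdf, t_cdf, t_vals)     # invert the target CDF
    return matched[s_idx].reshape(source.shape)

def transfer_tone(source_rgb, target_rgb):
    src = gray_world_white_balance(source_rgb)
    tgt = gray_world_white_balance(target_rgb)
    src_lum, tgt_lum = src.mean(axis=2), tgt.mean(axis=2)   # crude luminance
    new_lum = histogram_match(src_lum, tgt_lum)
    scale = new_lum / np.maximum(src_lum, 1e-6)
    return np.clip(src * scale[..., None], 0.0, 1.0)
```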

3.
In this paper we show how to use two-colored pixels as a generic tool for image processing. We apply two-colored pixels as a basic operator as well as a supporting data structure for several image processing applications. Traditionally, images are represented by a regular grid of square pixels with one constant color each. In the two-colored pixel representation, we reduce the image resolution and replace blocks of N × N pixels by one square that is split by a (feature) line into two regions with constant colors. We show how the conversion of standard mono-colored pixel images into two-colored pixel images can be computed efficiently by applying a hierarchical algorithm along with a CUDA-based implementation. Two-colored pixels overcome some of the limitations that classical pixel representations have, and their feature lines provide minimal geometric information about the underlying image region that can be effectively exploited for a number of applications. We show how to use two-colored pixels as an interactive brush tool, achieving real-time performance for image abstraction and non-photorealistic filtering. Additionally, we propose a real-time solution for image retargeting, defined as a linear minimization problem on a regular or even adaptive two-colored pixel image. The concept of two-colored pixels can be easily extended to a video volume, and we demonstrate this for the example of video retargeting.
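
A naive version of the conversion step can be sketched as follows: each N × N block is split by thresholding its luminance at the block mean (instead of fitting an optimized feature line), and each side is filled with its mean color; the hierarchical algorithm and CUDA implementation are not reproduced.

```python
# Simplified sketch of the two-colored pixel idea: each n x n block is split
# into two regions, each filled with its mean color. The split is a crude
# luminance threshold instead of the paper's optimized feature line.
import numpy as np

def two_colored_pixels(img, n=8):
    """img: float RGB (H, W, 3). Returns a two-colored-block approximation."""
    out = img.copy()
    lum = img.mean(axis=2)
    h, w = lum.shape
    for y in range(0, h, n):
        for x in range(0, w, n):
            block = img[y:y + n, x:x + n]
            mask = lum[y:y + n, x:x + n] > lum[y:y + n, x:x + n].mean()
            for region in (mask, ~mask):
                if region.any():
                    out[y:y + n, x:x + n][region] = block[region].mean(axis=0)
    return out
```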

4.
Many works focus on multi-spectral capture and analysis, but multi-spectral display still remains a challenge. Most prior works on multi-primary displays use ad-hoc narrow band primaries that assure a larger color gamut, but cannot assure good spectral reproduction. Content-dependent spectral analysis is the only way to produce good spectral reproduction, but cannot be applied to general data sets. Wide primaries are better suited for assuring good spectral reproduction due to greater coverage of the spectral range, but have not been explored much. In this paper we explore the use of wide band primaries for accurate spectral reproduction for the first time and present the first content-independent multi-spectral display, achieved using superimposed projections with modified wide band primaries. We present a content-independent primary selection method that selects a small set of n primaries from a large set of m candidate primaries, where m > n. Our primary selection method chooses primaries with complete coverage of the range of visible wavelengths (for good spectral reproduction accuracy), low interdependency (to limit the primaries to a small number) and high light throughput (for high light efficiency). Once the primaries are selected, the input values of the different primary channels needed to generate a desired spectrum are computed using an optimization method that minimizes spectral mismatch while maximizing visual quality. We implement a real prototype of a multi-spectral display consisting of 9 primaries using three modified conventional 3-primary projectors, and compare it with a conventional display to demonstrate its superior performance. Experiments show that our display provides a large gamut and a good visual appearance while displaying multi-spectral images at high spectral accuracy.
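
The second stage, computing channel inputs for a desired spectrum once the primaries are fixed, can be sketched with nonnegative least squares; the paper's objective additionally weights visual quality, so this is only a hedged approximation.

```python
# Minimal sketch: given the spectra of the selected primaries, find nonnegative
# channel inputs whose mixture best matches a desired spectrum. Plain NNLS is a
# stand-in for the paper's optimization, which also trades off visual quality.
import numpy as np
from scipy.optimize import nnls

def primary_inputs(primary_spectra, target_spectrum):
    """
    primary_spectra: (n_wavelengths, n_primaries) spectra of the chosen primaries.
    target_spectrum: (n_wavelengths,) desired output spectrum.
    Returns (inputs clipped to [0, 1], residual norm), assuming the spectra are
    measured at full drive.
    """
    weights, residual = nnls(primary_spectra, target_spectrum)
    return np.clip(weights, 0.0, 1.0), residual
```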

5.
Repeated scene elements are copious and ubiquitous in natural images. With previous image segmentation methods, cutting out those repeated elements usually involves tedious and laborious user interaction. In this paper, we present RepSnapping, a novel method oriented to cutout of repeated scene elements with much less user interaction. By exploring the inherent similarity between repeated elements, a new optimization model is introduced to thread correlated elements into the segmentation procedure. The model proposed here enables efficient solution using max-flow/min-cut on an extended graph. Experiments indicate that RepSnapping facilitates cutout of repeated elements better than state-of-the-art interactive image segmentation and repetition detection methods.
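
The max-flow/min-cut machinery the method builds on can be illustrated with a plain binary segmentation graph; the example below uses networkx and simple intensity-based capacities, and omits the paper's extension that threads correlated repeated elements.

```python
# Sketch of plain graph-cut segmentation: intensity-based terminal capacities
# and contrast-based neighbour capacities, solved with a max-flow/min-cut.
# RepSnapping's extended graph linking repeated elements is not reproduced.
import numpy as np
import networkx as nx

def graph_cut_segment(img, fg_mean, bg_mean, lam=2.0, sigma=0.1):
    """img: small float grayscale (H, W). Returns a boolean foreground mask."""
    h, w = img.shape
    G = nx.DiGraph()
    def node(y, x):
        return y * w + x
    src, sink = 'S', 'T'
    for y in range(h):
        for x in range(w):
            v = img[y, x]
            # Terminal capacities: cost of labeling the pixel background / foreground.
            G.add_edge(src, node(y, x), capacity=(v - bg_mean) ** 2)
            G.add_edge(node(y, x), sink, capacity=(v - fg_mean) ** 2)
            for dy, dx in ((0, 1), (1, 0)):
                if y + dy < h and x + dx < w:
                    wgt = lam * np.exp(-((v - img[y + dy, x + dx]) ** 2) / (2 * sigma ** 2))
                    G.add_edge(node(y, x), node(y + dy, x + dx), capacity=wgt)
                    G.add_edge(node(y + dy, x + dx), node(y, x), capacity=wgt)
    _, (reachable, _) = nx.minimum_cut(G, src, sink)
    mask = np.zeros((h, w), dtype=bool)
    for y in range(h):
        for x in range(w):
            mask[y, x] = node(y, x) in reachable
    return mask
```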

6.
Edge-preserving image filtering is a valuable tool for a variety of applications in image processing and computer vision. Motivated by a new, simple but effective local Laplacian filter, we propose a scalable and efficient image filtering framework that extends this edge-preserving image filter and constructs a uniform implementation in O(N) time. The proposed framework is built upon a practical global-to-local strategy. The input image is first remapped globally by a series of tentative remapping functions to generate a virtual candidate image sequence (Virtual Image Pyramid Sequence, VIPS). This sequence is then recombined locally into a single output image by a flexible edge-aware pixel-level fusion rule. To avoid halo artifacts, both the output image and the virtual candidate image sequence are transformed into multi-resolution pyramid representations. Four applications of the framework are presented: single-image dehazing, multi-exposure fusion, fast edge-preserving filtering, and tone mapping. Experiments on filtering quality and computational efficiency indicate that the proposed framework supports a wide range of fast image filters that yield visually compelling results.
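
The global-to-local skeleton can be sketched roughly in the spirit of fast local Laplacian filtering: remap the whole image with a few tentative functions, build a Laplacian pyramid per candidate, and pick coefficients per pixel from the candidate closest to the Gaussian-pyramid value. The remapping function and nearest-candidate fusion rule below are illustrative assumptions rather than the paper's exact VIPS construction and fusion rule.

```python
# Sketch of a global-to-local pipeline: remap globally with several reference
# values, fuse Laplacian-pyramid coefficients locally by nearest candidate.
import numpy as np
from scipy.ndimage import zoom, gaussian_filter

def gaussian_pyramid(img, levels):
    pyr = [img]
    for _ in range(levels - 1):
        pyr.append(zoom(gaussian_filter(pyr[-1], 1.0), 0.5, order=1))
    return pyr

def laplacian_pyramid(img, levels):
    g = gaussian_pyramid(img, levels)
    lap = [g[i] - zoom(g[i + 1], np.array(g[i].shape) / np.array(g[i + 1].shape), order=1)
           for i in range(levels - 1)]
    return lap + [g[-1]]

def collapse(lap):
    out = lap[-1]
    for level in reversed(lap[:-1]):
        out = level + zoom(out, np.array(level.shape) / np.array(out.shape), order=1)
    return out

def local_laplacian_filter(img, alpha=0.5, n_candidates=8, levels=4):
    """img: float grayscale in [0, 1]. alpha < 1 attenuates detail."""
    refs = np.linspace(0.0, 1.0, n_candidates)
    candidates = [g + alpha * (img - g) for g in refs]     # tentative remappings
    cand_laps = [laplacian_pyramid(c, levels) for c in candidates]
    gauss = gaussian_pyramid(img, levels)
    fused = []
    for lvl in range(levels):
        # Pick, per coefficient, the candidate whose reference value is closest
        # to the Gaussian-pyramid coefficient at that position.
        idx = np.clip(np.rint(gauss[lvl] * (n_candidates - 1)).astype(int),
                      0, n_candidates - 1)
        stack = np.stack([cl[lvl] for cl in cand_laps])    # (K, h, w)
        fused.append(np.take_along_axis(stack, idx[None], axis=0)[0])
    return np.clip(collapse(fused), 0.0, 1.0)
```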

7.
Color transfer is an image processing technique which can produce a new image combining one source image's contents with another image's color style. While able to produce convincing results, Reinhard et al.'s pioneering work has two problems: mixing up of colors in different regions, and the fidelity problem. Many local color transfer algorithms have been proposed to resolve the first problem, but the second problem has received little attention. In this paper, a novel color transfer algorithm is presented to resolve the fidelity problem of color transfer in terms of scene details and colors. It is well known that the human visual system is more sensitive to local intensity differences than to intensity itself. We thus consider that preserving the color gradient is necessary for scene fidelity. We formulate color transfer as an optimization problem and solve it in two steps: histogram matching and a gradient-preserving optimization. Following this idea of fidelity in terms of color and gradient, we also propose a metric for objectively evaluating the performance of example-based color transfer algorithms. The experimental results show the validity and high fidelity of our algorithm, and that it can be used to deal with local color transfer.
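
The gradient-preserving step can be sketched as a screened Poisson problem: find an image close to the histogram-matched result whose gradients stay close to those of the source. Under a periodic-boundary assumption it has a closed-form FFT solution; the paper's actual solver and boundary handling may differ.

```python
# Sketch of a gradient-preserving step: minimize
#   sum (u - t)^2 + lam * |grad u - grad s|^2,
# where t is the histogram-matched image and s the original source. The normal
# equations (I - lam*Lap) u = t - lam*Lap s are diagonal in the Fourier domain
# under periodic boundaries (an assumption made here for simplicity).
import numpy as np

def gradient_preserving_fuse(matched, source, lam=5.0):
    h, w = matched.shape
    fy = np.fft.fftfreq(h)[:, None]
    fx = np.fft.fftfreq(w)[None, :]
    # Symbol of the discrete 5-point Laplacian under periodic boundaries (<= 0).
    lap = 2.0 * (np.cos(2 * np.pi * fy) - 1.0) + 2.0 * (np.cos(2 * np.pi * fx) - 1.0)
    numerator = np.fft.fft2(matched) - lam * lap * np.fft.fft2(source)
    return np.fft.ifft2(numerator / (1.0 - lam * lap)).real
```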

8.
It is popular to edit the appearance of images using strokes, owing to their ease of use and the convenience with which they convey the user's intention. However, propagating the user inputs to the rest of the image requires solving an enormous optimization problem, which is very time-consuming, thus preventing its practical use. In this paper, a two-step edit propagation scheme is proposed: first solve the edits on clusters of similar pixels, and then interpolate individual pixel edits from the cluster edits. The key in our scheme is that we use efficient stroke sampling to compute the affinity between image pixels and strokes. Based on this, our clustering does not need to be stroke-adaptive and thus the number of clusters is greatly reduced, resulting in a significant speedup. The proposed method has been tested on various images, and the results show that it is more than one order of magnitude faster than existing methods, while still achieving precise results compared with the ground truth. Moreover, its efficiency is not sensitive to the number of strokes, making it suitable for performing dense edits in practice.
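
A rough sketch of the two-step idea: cluster pixels in a small color-plus-position feature space, solve the edit per cluster from a Gaussian affinity to sampled stroke pixels, then copy each cluster's edit back to its pixels. The feature space, the affinity, and the copy-back step (the paper interpolates pixel edits from cluster edits) are simplifying assumptions.

```python
# Minimal two-step edit propagation sketch: (1) cluster pixels, solve edits per
# cluster from a Gaussian affinity to sampled stroke pixels; (2) copy cluster
# edits back to pixels.
import numpy as np
from scipy.cluster.vq import kmeans2

def propagate_edits(img, stroke_mask, stroke_values, n_clusters=64,
                    n_samples=200, sigma=0.2, rng=np.random.default_rng(0)):
    """img: float RGB (H, W, 3); stroke_mask: bool (H, W);
    stroke_values: per-pixel edit strength on the strokes, shape (H, W)."""
    h, w, _ = img.shape
    yy, xx = np.mgrid[0:h, 0:w]
    feats = np.concatenate([img.reshape(-1, 3),
                            (yy / h).reshape(-1, 1),
                            (xx / w).reshape(-1, 1)], axis=1)
    centers, labels = kmeans2(feats, n_clusters, minit='++')
    # Sample stroke pixels to keep the affinity computation cheap.
    stroke_idx = np.flatnonzero(stroke_mask.ravel())
    sample = rng.choice(stroke_idx, size=min(n_samples, stroke_idx.size),
                        replace=False)
    aff = np.exp(-np.sum((centers[:, None, :] - feats[sample][None, :, :]) ** 2,
                         axis=2) / (2 * sigma ** 2))        # (clusters, samples)
    cluster_edit = aff @ stroke_values.ravel()[sample] / np.maximum(aff.sum(axis=1), 1e-8)
    return cluster_edit[labels].reshape(h, w)
```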

9.
Providing multiple meanings in a single piece of art has always been intriguing to both artists and observers. We present Purkinje images, which have different interpretations depending on the luminance adaptation of the observer. Finding such images is an optimization that minimizes the sum of the distance to one reference image in photopic conditions and the distance to another reference image in scotopic conditions. To model the shift of image perception between day and night vision, we decompose the input images into a Laplacian pyramid. Distances under different observation conditions in this representation are independent between pyramid levels and pixel positions and become matrix multiplications. The optimal pixel color can be found by inverting a small, per-pixel linear system in real time on a GPU. Finally, two user studies analyze our results in terms of recognition performance and fidelity with respect to the reference images.
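
The per-pixel solve can be sketched as follows, assuming 3 × 3 linear maps P and S that predict photopic and scotopic responses from pixel color; the matrices are placeholders rather than the paper's calibrated models, and the Laplacian-pyramid distance is omitted.

```python
# Sketch of the per-pixel linear solve: choose pixel color c minimizing
#   |P c - d|^2 + |S c - n|^2,
# where d and n are the photopic and scotopic reference values. P and S are
# illustrative placeholders, not calibrated perceptual models.
import numpy as np

def purkinje_colors(day_ref, night_ref, P, S):
    """day_ref, night_ref: (H, W, 3) target responses; P, S: (3, 3) maps."""
    A = P.T @ P + S.T @ S                  # shared, symmetric normal-equation matrix
    b = day_ref @ P + night_ref @ S        # per-pixel right-hand sides, (H, W, 3)
    # A is symmetric, so right-multiplying rows of b by A^-1 solves A c = b per pixel.
    return np.clip(b @ np.linalg.inv(A), 0.0, 1.0)
```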

10.
We present an alternative approach to creating digital camouflage images that follows human perceptual intuition and is consistent with the physical creation procedure of artists. Our method is based on a two-scale decomposition scheme of the input images. We modify the large-scale layer of the background image by considering structural importance based on energy optimization, and the detail layer by controlling its spatial variation. A gradient correction is presented to prevent halo artifacts. Users can control the difficulty level of perceiving the camouflage effect through a few parameters. Our camouflage images are natural and have fewer long coherent edges in the hidden region. Experimental results show that our algorithm yields visually pleasing camouflage images.
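
The two-scale backbone can be sketched with a Gaussian base/detail split and a naive blend of the hidden image into the base layer; the paper's energy optimization over structural importance, detail-layer control, and gradient correction are all omitted here.

```python
# Sketch of the two-scale decomposition backbone: base + detail split, weak
# embedding of the hidden image into the base layer, recombination.
import numpy as np
from scipy.ndimage import gaussian_filter

def naive_camouflage(background, hidden, region_mask, strength=0.15, sigma=5.0):
    """background, hidden: float grayscale (H, W); region_mask: bool (H, W)."""
    base = gaussian_filter(background, sigma)    # large-scale layer
    detail = background - base                   # detail layer
    blended_base = np.where(region_mask,
                            (1 - strength) * base + strength * hidden,
                            base)
    return np.clip(blended_base + detail, 0.0, 1.0)
```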

11.
This paper presents a novel example-based stippling technique that employs a simple and intuitive concept to convert a color image into a pointillism painting. Our method relies on analyzing and imitating the color distributions of Seurat's paintings to obtain a statistical color model. This model can then be easily combined with modified multi-class blue noise sampling to stylize an input image with the characteristics of color composition in Seurat's paintings. The blue noise property of the output image also ensures that the color points are randomly located but remain spatially uniform. In our experiments, multivariate goodness-of-fit tests were adopted to quantitatively analyze the results of the proposed and previous methods, confirming that the color composition of our results is more similar to Seurat's painting style than that of previous approaches. Additionally, we conducted a user study with artist participants to qualitatively evaluate the images synthesized by the proposed method.
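
The point-placement side can be sketched with simple dart-throwing Poisson-disk sampling, which already yields the "randomly located but spatially uniform" property; the colors below are just the local image colors with slight jitter, a stand-in for the statistical Seurat color model and the multi-class blue noise sampler.

```python
# Sketch of blue-noise-like dot placement via dart throwing, with dots colored
# from the underlying image plus slight jitter.
import numpy as np

def dart_throwing(h, w, radius, n_attempts=20000, rng=np.random.default_rng(1)):
    """Return an array of (y, x) sample positions at least `radius` apart."""
    points = []
    for _ in range(n_attempts):
        p = rng.uniform([0, 0], [h, w])
        if all((p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2 >= radius ** 2 for q in points):
            points.append(p)
    return np.array(points)

def stipple(img, radius=4.0, rng=np.random.default_rng(1)):
    """img: float RGB (H, W, 3). Returns dot positions and their colors."""
    h, w, _ = img.shape
    pts = dart_throwing(h, w, radius, rng=rng)
    colors = img[pts[:, 0].astype(int), pts[:, 1].astype(int)]
    # Jitter each dot's color slightly, a crude nod to divisionist palettes.
    colors = np.clip(colors + rng.normal(0, 0.05, colors.shape), 0.0, 1.0)
    return pts, colors
```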

12.
We present a new image completion method based on an additional large displacement view (LDV) of the same scene for automatically and faithfully repairing large missing regions in the target image. A coarse-to-fine distortion correction algorithm is proposed to minimize the perspective distortion in the corresponding parts of the common scene regions in the LDV image. First, under the assumption of a planar scene, the LDV image is warped according to a homography to generate the initial correction result. Second, the residual distortions in the common known scene regions are revealed by a mismatch detection mechanism and relaxed by energy optimization of overlap correspondences, with the expectations of color constancy and displacement field smoothness. The fundamental matrix for the two views is then computed based on the reliable correspondence set. Third, under the constraints of epipolar geometry, displacement field smoothness and color consistency of neighboring pixels, the missing pixels are restored in order according to a specially defined repairing priority function. Finally, we eliminate ghosting between the repaired region and its surroundings by Poisson image blending. Experimental results demonstrate that our method outperforms recent state-of-the-art image completion methods for repairing large missing areas with complex structure information.
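
The first, planar-scene stage can be sketched with an off-the-shelf feature-based homography alignment in OpenCV; the mismatch detection, epipolar-geometry-guided restoration, and Poisson blending stages are omitted, and the ORB/RANSAC pipeline below is an assumption rather than the authors' implementation.

```python
# Sketch of the planar-scene stage: warp the LDV image onto the target with a
# single homography and copy warped pixels into the missing region.
import cv2
import numpy as np

def fill_from_ldv(target, ldv, hole_mask):
    """target, ldv: uint8 BGR images; hole_mask: bool (H, W), True = missing."""
    gray_ldv = cv2.cvtColor(ldv, cv2.COLOR_BGR2GRAY)
    gray_tgt = cv2.cvtColor(target, cv2.COLOR_BGR2GRAY)
    orb = cv2.ORB_create(2000)
    k1, d1 = orb.detectAndCompute(gray_ldv, None)
    k2, d2 = orb.detectAndCompute(gray_tgt, None)
    matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(d1, d2)
    src = np.float32([k1[m.queryIdx].pt for m in matches])
    dst = np.float32([k2[m.trainIdx].pt for m in matches])
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 3.0)
    warped = cv2.warpPerspective(ldv, H, (target.shape[1], target.shape[0]))
    out = target.copy()
    out[hole_mask] = warped[hole_mask]    # fill the hole from the warped view
    return out
```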

13.
Smart deformation and warping tools play an important part in modern-day geometric modeling systems. They allow existing content to be stretched or scaled while preserving visually salient information. To date, these techniques have primarily focused on preserving local shape details, not taking into account important global structures such as symmetry and line features. In this work we present a novel framework that can be used to preserve the global structure in images and vector art. Such structures include symmetries, the spatial relations between shapes, and line features in an image. Central to our method is a new formulation of structure preservation as an optimization problem. We use novel optimization strategies to achieve the interactive performance required by modern-day modeling applications. We demonstrate the effectiveness of our framework by performing structure-preserving deformation of images and complex vector art at interactive rates.

14.
Recent spatially varying reflectance (svBRDF) printing systems can reproduce an input document as a combination of matte, glossy and metallic inks. Due to the limited number of inks, this reproduction process incurs some distortion. In this work, we present an svBRDF gamut mapping algorithm that minimizes distortions in the angular and spatial domains. To preserve a material's perceived variation with lighting and view, we introduce an improved BRDF similarity metric that builds on experimental results on reflectance perception and on the statistics of natural lighting environments. Our experiments show better preservation of object color and highlights, validated quantitatively as well as through a perceptual study. As for the spatial domain, we show how to adapt traditional color gamut mapping methods to svBRDFs. Our solution takes into account the contrast between regions, achieving better preservation of textures and edges.

15.
The fusion and combination of images from multiple modalities is important in many applications. Typically, this process consists of the alignment of the images and the combination of their complementary information. In this work, we focus on the former part and propose a multimodal image distance measure based on the commutativity of graph Laplacians. The eigenvectors of the image graph Laplacian, and thus the graph Laplacian itself, capture the intrinsic structure of the image's modality. Using Laplacian commutativity as a criterion of image structure preservation, we adapt the problem of finding the closest commuting operators to multimodal image registration. Hence, by using the relation between simultaneous diagonalization and commutativity of matrices, we compare multimodal image structures by means of the commutativity of their graph Laplacians. In this way, we avoid spectrum reordering schemes or additional manifold alignment steps which are necessary to ensure the comparability of eigenspaces across modalities. We show on synthetic and real datasets that this approach is applicable to dense rigid and non-rigid image registration. The results demonstrate that the proposed measure is able to deal with very challenging multimodal datasets and compares favorably to normalized mutual information, a de facto similarity measure for multimodal image registration.
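
The core distance can be sketched directly: build an intensity-weighted 4-neighbor graph Laplacian for each (small or downsampled) image and take the Frobenius norm of their commutator. The weighting and neighborhood below are assumptions, not necessarily the paper's construction.

```python
# Sketch of a commutator-based structural dissimilarity between two images,
# using dense intensity-weighted 4-neighbour graph Laplacians.
import numpy as np

def image_graph_laplacian(img, sigma=0.1):
    """img: small float grayscale (H, W); returns a dense (HW, HW) Laplacian."""
    h, w = img.shape
    n = h * w
    W = np.zeros((n, n))
    idx = np.arange(n).reshape(h, w)
    for dy, dx in ((0, 1), (1, 0)):               # right and down neighbours
        a = idx[:h - dy, :w - dx].ravel()
        b = idx[dy:, dx:].ravel()
        wgt = np.exp(-((img.ravel()[a] - img.ravel()[b]) ** 2) / (2 * sigma ** 2))
        W[a, b] = wgt
        W[b, a] = wgt
    return np.diag(W.sum(axis=1)) - W             # L = D - W

def commutator_distance(img_a, img_b):
    La, Lb = image_graph_laplacian(img_a), image_graph_laplacian(img_b)
    return np.linalg.norm(La @ Lb - Lb @ La, ord='fro')
```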

16.
Material engineers use interrupted in situ tensile testing to investigate the damage mechanisms in composite materials. For each subsequent scan, the load is incrementally increased until the specimen is completely fractured. During the interrupted in situ testing of glass fiber reinforced polymers (GFRPs), defects of four types are expected to appear: matrix fracture, fiber/matrix debonding, fiber pull-out, and fiber fracture. There is a growing demand among material engineers for the detection and analysis of these defects. In this paper, we present a novel workflow for the detection, classification, and visual analysis of defects in GFRPs using interrupted in situ tensile tests in combination with X-ray Computed Tomography. The workflow is based on the automatic extraction of defects and fibers. We introduce an automatic Defect Classifier that assigns the most suitable type to each defect based on its geometrical features. We present a visual analysis system that integrates four visualization methods: 1) the Defect Viewer highlights defects with visually encoded type in the context of the original CT image, 2) the Defect Density Maps provide an overview of the defect distributions according to type in 2D and 3D, 3) the Final Fracture Surface estimates the location of the material fracture and displays it as a 3D surface, 4) the 3D Magic Lens enables interactive exploration by combining detailed visualizations in the region of interest with overview visualizations as context. In collaboration with material engineers, we evaluate our solution and demonstrate its practical applicability.
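
One of the listed components, the Defect Density Map, can be sketched as a smoothed 2D histogram of defect centroids; the bin count and smoothing radius below are illustrative choices.

```python
# Sketch of a 2D defect density map: bin defect centroids (optionally weighted,
# e.g. by defect volume) and smooth the counts.
import numpy as np
from scipy.ndimage import gaussian_filter

def defect_density_map(centroids, extent, bins=64, weights=None, sigma=1.5):
    """centroids: (N, 2) defect positions (y, x); extent: ((ymin, ymax), (xmin, xmax))."""
    hist, _, _ = np.histogram2d(centroids[:, 0], centroids[:, 1],
                                bins=bins, range=extent, weights=weights)
    return gaussian_filter(hist, sigma)
```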

17.
18.
Color images often have to be converted to grayscale for reproduction, artistic purposes, or subsequent processing. Methods performing the conversion of color images to grayscale aim to retain as much information about the original color image as possible, while simultaneously producing perceptually plausible grayscale results. Recently, many conversion methods have been proposed, but their performance has not yet been assessed; therefore, the strengths and weaknesses of color-to-grayscale conversions are not known. In this paper, we present the results of two subjective experiments in which a total of 24 color images were converted to grayscale using seven state-of-the-art conversions and evaluated by 119 human subjects using a paired comparison paradigm. We collected nearly 20,000 human responses and used them to evaluate the accuracy and preference of the color-to-grayscale conversions. To the best of our knowledge, the study presented in this paper is the first perceptual evaluation of color-to-grayscale conversions. Besides exposing the strengths and weaknesses of the evaluated methods, the aim of the study is to attain a deeper understanding of the examined field, which can accelerate the progress of color-to-grayscale conversion.
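
Paired-comparison data of this kind are commonly converted to interval scores with Thurstone Case V scaling, sketched below; this is a standard treatment of such data, not necessarily the exact analysis used in the study.

```python
# Sketch of Thurstone Case V scaling: C[i, j] counts how often method i was
# preferred over method j; each method's score is the mean z-score of its win
# proportions (higher = more preferred).
import numpy as np
from scipy.stats import norm

def thurstone_case_v(counts):
    """counts: (M, M) integer matrix of pairwise preference counts."""
    totals = counts + counts.T
    with np.errstate(invalid='ignore', divide='ignore'):
        p = counts / totals                            # win proportions
    p = np.clip(np.nan_to_num(p, nan=0.5), 0.01, 0.99)  # avoid infinite z-scores
    np.fill_diagonal(p, 0.5)
    z = norm.ppf(p)
    return z.mean(axis=1)
```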

19.
Recent color transfer methods use local information to learn the transformation from a source to an exemplar image, and then transfer this appearance change to a target image. These solutions achieve very successful results for general mood changes, e.g., changing the appearance of an image from “sunny” to “overcast”. However, such methods have a hard time creating new image content, such as leaves on a bare tree. Texture transfer, on the other hand, can synthesize such content but tends to destroy image structure. We propose the first algorithm that unifies color and texture transfer, outperforming both by leveraging their respective strengths. A key novelty in our approach resides in teasing apart appearance changes that can be modeled simply as changes in color versus those that require new image content to be generated. Our method starts with an analysis phase which evaluates the success of color transfer by comparing the exemplar with the source. This analysis then drives a selective, iterative texture transfer algorithm that simultaneously predicts the success of color transfer on the target and synthesizes new content where needed. We demonstrate our unified algorithm by transferring large temporal changes between photographs, such as change of season – e.g., leaves on bare trees or piles of snow on a street – and flooding.

20.
Directors employ a process called “color grading” to add color styles to feature films. Color grading is used for a number of reasons, such as accentuating a certain emotion or expressing the signature look of a director. We collect a database of feature film clips and label them with tags such as director, emotion, and genre. We then learn a model that maps from the low-level color and tone properties of film clips to the associated labels. This model allows us to examine a number of common hypotheses on the use of color to achieve goals, such as specific emotions. We also describe a method to apply our learned color styles to new images and videos. Along with our analysis of color grading techniques, we demonstrate a number of images and videos that are automatically filtered to resemble certain film styles.
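
The "low-level color and tone properties to labels" direction can be sketched with simple hand-crafted features and an off-the-shelf classifier; the feature set and model below are assumptions, not the paper's learned model.

```python
# Illustrative sketch: per-frame HSV histograms plus a crude contrast cue as
# features, with a logistic-regression classifier fit on labelled clips.
import numpy as np
from skimage import color
from sklearn.linear_model import LogisticRegression

def color_tone_features(rgb, bins=8):
    """rgb: float frame (H, W, 3) in [0, 1]. Returns a 1D feature vector."""
    hsv = color.rgb2hsv(rgb)
    feats = [np.histogram(hsv[..., c], bins=bins, range=(0, 1), density=True)[0]
             for c in range(3)]
    feats.append([hsv[..., 2].std()])        # crude tone-contrast cue
    return np.concatenate(feats)

def fit_style_model(frames, labels):
    """frames: list of RGB frames; labels: e.g. emotion or genre tags."""
    X = np.stack([color_tone_features(f) for f in frames])
    return LogisticRegression(max_iter=1000).fit(X, labels)
```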
