Similar documents
 20 similar documents found (search time: 31 ms)
1.
Maintaining consistent styles across glyphs is an arduous task in typeface design. In this work we introduce FlexyFont, a flexible tool for synthesizing a complete typeface that has a consistent style with a given small set of glyphs. Motivated by the fact that typeface designers often maintain a library of glyph parts to achieve a consistent typeface, we aim to learn part consistency between glyphs of different characters across typefaces. We take a part-assembly approach: we first decompose the given glyphs into semantic parts and then assemble them according to learned transfer rules to reconstruct the missing glyphs. To maintain style consistency, we represent the style of a font as a vector of pairwise part similarities. By learning a distribution over these feature vectors, we are able to predict the style of a novel typeface given only a few examples. We use a popular machine learning method as well as retrieval-based methods to quantitatively assess the performance of our feature vector, with favorable results. We also present an intuitive interface that allows users to interactively create novel typefaces with ease. The synthesized fonts can be directly used in real-world design.
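As an illustration of the style representation described above, here is a minimal Python sketch that turns a set of per-part descriptors into a vector of pairwise part similarities. The descriptors and the cosine measure are assumptions made for the example; the paper's actual part features and learned distribution are not shown.

```python
import numpy as np

def style_vector(part_descriptors):
    """Build a style feature vector from pairwise part similarities.

    part_descriptors: (P, D) array, one D-dimensional descriptor per glyph part
    (hypothetical descriptors; the paper's exact features differ).
    Returns the upper-triangular cosine similarities as a flat vector.
    """
    X = np.asarray(part_descriptors, dtype=float)
    X = X / (np.linalg.norm(X, axis=1, keepdims=True) + 1e-12)  # unit-normalize rows
    S = X @ X.T                                                 # pairwise cosine similarity
    iu = np.triu_indices(len(X), k=1)                           # keep each pair once
    return S[iu]

# Example: 4 parts with toy 8-D descriptors -> a 6-dimensional style vector
rng = np.random.default_rng(0)
print(style_vector(rng.normal(size=(4, 8))).shape)  # (6,)
```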

2.
This paper investigates a new approach to color transfer. Rather than transferring color from one image to another globally, we propose a system with a stroke-based user interface that provides a direct indication mechanism, together with a multiple local color transfer method. Through our system the user can easily enhance a defective (source) photo by referring to other good-quality (target) images, simply by drawing some strokes; the system then performs the multiple local color transfer automatically. The system consists of two major steps. First, the user draws strokes on the source and target images to indicate corresponding regions as well as the regions he or she wants to preserve; the regions to be preserved are masked out using an improved graph-cuts algorithm. Second, a multiple local color transfer method transfers the color from the target image(s) to the source image through gradient-guided pixel-wise color transfer functions. As a result, the defective (source) image is enhanced seamlessly by multiple local color transfer based on good-quality (target) examples through an interactive and intuitive stroke-based user interface.
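The following is a minimal sketch of the local transfer idea under a strong simplification: per-channel mean and standard-deviation matching between the stroke-indicated regions, rather than the paper's gradient-guided pixel-wise transfer functions. The masks are assumed to come from the user strokes and the graph-cuts step.

```python
import numpy as np

def local_color_transfer(source, target, src_mask, tgt_mask):
    """Match mean/std of the masked source region to the masked target region.

    source, target: float RGB images in [0, 1], shape (H, W, 3).
    src_mask, tgt_mask: boolean masks of the corresponding regions
    (in the full system these come from user strokes + graph cuts).
    """
    out = source.copy()
    for c in range(3):  # per-channel statistics matching (Reinhard-style simplification)
        s = source[..., c][src_mask]
        t = target[..., c][tgt_mask]
        scale = t.std() / (s.std() + 1e-12)
        out[..., c][src_mask] = (s - s.mean()) * scale + t.mean()
    return np.clip(out, 0.0, 1.0)
```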

3.
Content-aware image retargeting is a technique that can flexibly display images at different aspect ratios while preserving salient regions. Many image retargeting techniques have recently been proposed. To compare the quality of images produced by different retargeting methods quickly and reliably, this paper presents an objective metric that simulates the human visual system (HVS). Unlike traditional objective assessment methods that work in a bottom-up manner (i.e., assembling pixel-level features in a local-to-global way), we propose the reverse order (top-down manner), organizing image features from global to local viewpoints, which leads to a new objective assessment metric for retargeted images. A scale-space matching method is designed to facilitate the extraction of global geometric structures from retargeted images. By traversing the scale space from coarse to fine levels, local pixel correspondence is also established. The objective assessment metric is then based on both global geometric structures and local pixel correspondence. To evaluate color images, the CIE L*a*b* color space is used. Experimental results measure the performance of the proposed metric and show good consistency between the objective metric and subjective assessment by human observers.
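As a small illustration of the coarse-to-fine traversal, the sketch below builds a Gaussian scale space with SciPy; the smoothing scales are illustrative assumptions, and the paper's actual matching and metric are not implemented here.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def gaussian_scale_space(gray, sigmas=(8.0, 4.0, 2.0, 1.0)):
    """Return coarse-to-fine Gaussian-smoothed copies of a grayscale image.

    Global structure would be compared at the coarse levels first; finer levels
    then refine local pixel correspondence (a sketch of the coarse-to-fine idea,
    not the paper's full matching procedure).
    """
    gray = np.asarray(gray, dtype=float)
    return [gaussian_filter(gray, sigma=s) for s in sigmas]
```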

4.
In this paper, we present a general variational method for image fusion. In particular, we combine different images of the same subject into a single composite that offers optimal exposedness, saturation and local contrast. Previous research approaches this task by first pre-computing application-specific weights based on the input, and then combining these weights with the images into the final composite. In contrast, we formulate our model assumptions directly on the fusion result. To this end, we express the output image as a convex combination of the inputs and incorporate concepts from perceptually inspired contrast enhancement, such as a local and non-linear response. This output-driven approach is the key to the versatility of our general image fusion model. We demonstrate the performance of our fusion scheme with several applications such as exposure fusion, multispectral imaging and decolourization. For all application domains, we conduct thorough validations that illustrate the improvements compared to state-of-the-art approaches tailored to the individual tasks.
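A minimal sketch of the convex-combination idea, using simple per-pixel quality weights (Laplacian contrast and well-exposedness) instead of the paper's variational, output-driven formulation; the quality measures and parameters are assumptions for illustration.

```python
import numpy as np
from scipy.ndimage import laplace

def fuse(images, sigma=0.2):
    """Fuse a list of aligned float RGB images in [0, 1] by a convex combination.

    Per-pixel weights combine local contrast (Laplacian magnitude of luminance)
    and well-exposedness (closeness of luminance to 0.5); weights are normalized
    so they sum to one at every pixel. A simplified, non-variational sketch.
    """
    imgs = [np.asarray(im, dtype=float) for im in images]
    weights = []
    for im in imgs:
        lum = im.mean(axis=2)
        contrast = np.abs(laplace(lum))
        exposedness = np.exp(-((lum - 0.5) ** 2) / (2 * sigma ** 2))
        weights.append(contrast * exposedness + 1e-12)
    W = np.stack(weights)                  # (N, H, W)
    W = W / W.sum(axis=0, keepdims=True)   # convex combination weights
    return sum(w[..., None] * im for w, im in zip(W, imgs))
```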

5.
In the past years, sophisticated automatic segmentation algorithms for various medical image segmentation problems have been developed. However, there are always cases where automatic algorithms fail to provide an acceptable segmentation. In these cases the user needs efficient segmentation editing tools, a problem that has not received much attention in research. We give a comprehensive overview of segmentation editing for three-dimensional (3D) medical images. For segmentation editing in two-dimensional (2D) images, we discuss a sketch-based approach in which the user modifies the segmentation in the contour domain. Based on this 2D interface, we present an image-based as well as an image-independent method for intuitive and efficient segmentation editing in 3D, in the context of tumour segmentation in computed tomography (CT). Our editing tools have been evaluated on a database containing 1226 representative liver metastases, lung nodules and lymph nodes of different shape, size and image quality. In addition, we have performed a qualitative evaluation with radiologists and technical experts, demonstrating the efficiency of our tools.

6.
In this paper, we propose a novel approach for the automatic generation of visualizations from domain-specific data available on the web. We describe a general system pipeline that combines ontology mapping and probabilistic reasoning techniques. With this approach, a web page is first mapped to a Domain Ontology, which stores the semantics of a specific subject domain (e.g., music charts). The Domain Ontology is then mapped to one or more Visual Representation Ontologies, each of which captures the semantics of a visualization style (e.g., tree maps). To enable the mapping between these two ontologies, we establish a Semantic Bridging Ontology, which specifies the appropriateness of each semantic bridge. Finally, each Visual Representation Ontology is mapped to a visualization using an external visualization toolkit. We have developed a prototype software tool, SemViz, as a realisation of this approach. By interfacing its Visual Representation Ontologies with public domain software such as ILOG Discovery and Prefuse, SemViz is able to generate appropriate visualizations automatically from a large collection of popular web pages for music charts without prior knowledge of these web pages.

7.
Represented in a Morphable Model, 3D faces follow curved trajectories in face space as they age. We present a novel algorithm that computes the individual aging trajectories for given faces, based on a non-linear function that assigns an age to each face vector. This function is learned from a database of 3D scans of teenagers and adults using support vector regression. To apply the aging prediction to images of faces, we reconstruct a 3D model from the input image, apply the aging transformation to both shape and texture, and then render the face back into the same image or into images of other individuals at the appropriate ages, for example images of older children. Among other applications, our system can help to find missing children.
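A minimal sketch of the regression step, assuming toy stand-ins for the Morphable-Model face vectors and ages; it only shows how an RBF-kernel support vector regressor could map face vectors to ages, not the paper's full aging pipeline.

```python
import numpy as np
from sklearn.svm import SVR

# Toy stand-ins for Morphable-Model coefficient vectors and known subject ages.
rng = np.random.default_rng(0)
face_vectors = rng.normal(size=(200, 50))   # hypothetical face-space coordinates
ages = 8 + 40 * rng.random(200)             # hypothetical ages of the scanned subjects

# Non-linear age function f: face vector -> age, learned with support vector regression.
age_model = SVR(kernel="rbf", C=10.0, epsilon=1.0).fit(face_vectors, ages)

# Predicted ages for a few faces; following the local gradient of f in face space
# would give an individual aging direction (not computed in this sketch).
print(age_model.predict(face_vectors[:3]).round(1))
```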

8.
Image completion techniques aim to complete selected regions of an image in a natural-looking manner with little or no user interaction. Video completion, the space–time equivalent of the image completion problem, inherits and extends both the difficulties and the solutions of the original 2D problem, but also imposes new ones, mainly temporal coherency and space complexity (videos contain significantly more information than images). Data-driven approaches to completion have become a favoured choice, especially when large regions have to be filled. In this survey, we present the current state of the art in data-driven video completion techniques. For unacquainted researchers, we aim to provide a broad yet easy-to-follow introduction to the subject (including an extensive review of the image completion foundations) and early guidance on the challenges ahead. For the versed reader, we offer a comprehensive review of contemporary techniques, organized by their approaches to key aspects of the problem.

9.
Intravascular imaging provides new insights into the condition of vessel walls. This is crucial for cerebrovascular diseases, including stroke and cerebral aneurysms, where it may be an important factor in the indication of therapy. In this work, we provide new information about cerebral artery walls by combining ex vivo optical coherence tomography (OCT) imaging with histology data sets. To overcome the obstacles of deflated and collapsed vessels caused by the absence of blood pressure, the lack of co-alignment, and the geometric deformations caused by catheter probing, we developed a new image processing method called virtual inflation. We locally sample the vessel wall thickness based on the (deflated) vessel lumen border instead of the vessel's centerline. Our method is embedded in a multi-view framework in which correspondences between OCT and histology can be highlighted via brushing and linking, yielding OCT signal characteristics of the cerebral artery wall and its pathologies. Finally, we enrich the data views with a hierarchical clustering representation that is linked via virtual inflation and further supports the deduction of vessel wall pathologies.

10.
In this paper we show how to use two-colored pixels as a generic tool for image processing. We apply two-colored pixels as a basic operator as well as a supporting data structure for several image processing applications. Traditionally, images are represented by a regular grid of square pixels with one constant color each. In the two-colored pixel representation, we reduce the image resolution and replace each block of N × N pixels by one square that is split by a (feature) line into two regions with constant colors. We show how the conversion of standard mono-colored pixel images into two-colored pixel images can be computed efficiently with a hierarchical algorithm and a CUDA-based implementation. Two-colored pixels overcome some of the limitations of classical pixel representations, and their feature lines provide minimal geometric information about the underlying image region that can be effectively exploited for a number of applications. We show how to use two-colored pixels as an interactive brush tool, achieving real-time performance for image abstraction and non-photorealistic filtering. Additionally, we propose a real-time solution for image retargeting, formulated as a linear minimization problem on a regular or even adaptive two-colored pixel image. The concept of two-colored pixels can easily be extended to a video volume, and we demonstrate this for the example of video retargeting.
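A minimal sketch of converting one N × N block into a two-colored pixel; it splits the block by a simple luminance threshold rather than fitting an explicit feature line, and omits the hierarchical CUDA-based conversion.

```python
import numpy as np

def two_colored_pixel(block):
    """Approximate an (N, N, 3) float RGB block by two constant colors.

    The split is a simple luminance threshold here; the paper instead fits a
    feature line separating the two regions. Returns (color_a, color_b, mask).
    """
    lum = block.mean(axis=2)
    mask = lum >= lum.mean()                 # region assignment (stand-in for the feature line)
    if mask.all() or (~mask).all():          # flat block: a single color is enough
        c = block.reshape(-1, 3).mean(axis=0)
        return c, c, mask
    return block[mask].mean(axis=0), block[~mask].mean(axis=0), mask

def reconstruct(color_a, color_b, mask):
    """Render the two-colored pixel back into an (N, N, 3) block."""
    return np.where(mask[..., None], color_a, color_b)
```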

11.
Interactive computation of global illumination is a major challenge in current computer graphics research. Global illumination heavily affects the visual quality of generated images and is therefore a key attribute for the perception of photo-realistic images. Path tracing can simulate the physical behaviour of light using Monte Carlo techniques; however, its computational burden prohibits high-quality interactive rendering on standard commodity hardware. Solving the Monte Carlo integration with fewer samples results in characteristically noisy images. Global illumination filtering methods take advantage of the fact that the integrals for neighbouring pixels may be very similar. Averaging samples of similar characteristics in screen space may approximate the correct integral, but can produce visible outliers. In this paper, we present a novel path tracing pipeline based on an edge-aware filtering method for the indirect illumination that produces visually more pleasing results without noticeable outliers. The key idea is not to filter the noisy path-traced image directly, but to use it as guidance to filter a second image composed from characteristic scene attributes that do not contain noise by default. We show that our approach approximates the Monte Carlo integral better than previous methods. Since the computation is carried out entirely in screen space, it is applicable to fully dynamic scenes and arbitrary lighting, and allows for high-quality path tracing at interactive frame rates on commodity hardware.
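A minimal sketch of guided, edge-aware averaging in the form of a cross (joint) bilateral filter; the filter choice, its parameters and the roles of the two images are illustrative assumptions, not the paper's actual pipeline.

```python
import numpy as np

def cross_bilateral(image, guide, radius=3, sigma_s=2.0, sigma_r=0.1):
    """Edge-aware smoothing of `image` using edges of `guide` (both (H, W) floats).

    Spatial weights are Gaussian in pixel distance; range weights are Gaussian in
    the guidance difference, so averaging stops at guidance edges. O(H*W*r^2) sketch.
    """
    image = np.asarray(image, dtype=float)
    guide = np.asarray(guide, dtype=float)
    H, W = image.shape
    pad = radius
    img_p = np.pad(image, pad, mode="edge")
    gid_p = np.pad(guide, pad, mode="edge")
    out = np.zeros_like(image)
    norm = np.zeros_like(image)
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            w_s = np.exp(-(dx * dx + dy * dy) / (2 * sigma_s ** 2))
            shifted_img = img_p[pad + dy: pad + dy + H, pad + dx: pad + dx + W]
            shifted_gid = gid_p[pad + dy: pad + dy + H, pad + dx: pad + dx + W]
            w = w_s * np.exp(-((guide - shifted_gid) ** 2) / (2 * sigma_r ** 2))
            out += w * shifted_img
            norm += w
    return out / norm
```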

12.
We present a new approach aimed at understanding the structure of connections in edge‐bundling layouts. We combine the advantages of edge bundles with a bundle‐centric simplified visual representation of a graph's structure. For this, we first compute a hierarchical edge clustering of a given graph layout which groups similar edges together. Next, we render clusters at a user‐selected level of detail using a new image‐based technique that combines distance‐based splatting and shape skeletonization. The overall result displays a given graph as a small set of overlapping shaded edge bundles. Luminance, saturation, hue, and shading encode edge density, edge types, and edge similarity. Finally, we add brushing and a new type of semantic lens to help navigation where local structures overlap. We illustrate the proposed method on several real‐world graph datasets.
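A minimal sketch of the first step, hierarchical clustering of edges, using endpoint coordinates as a stand-in similarity descriptor and SciPy's Ward linkage; the paper's image-based rendering of the clusters (splatting and skeletonization) is not shown.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

def cluster_edges(edges, n_clusters=4):
    """Hierarchically cluster graph edges laid out in 2D.

    edges: (E, 4) array of [x1, y1, x2, y2] endpoint coordinates per edge
    (a simple stand-in descriptor; the paper uses its own edge similarity).
    Returns an integer cluster label per edge at the requested level of detail.
    """
    edges = np.asarray(edges, dtype=float)
    Z = linkage(edges, method="ward")            # full dendrogram over edges
    return fcluster(Z, t=n_clusters, criterion="maxclust")

# Example: two bundles of near-parallel edges are separated.
rng = np.random.default_rng(1)
bundle_a = np.tile([0, 0, 1, 1], (10, 1)) + 0.05 * rng.normal(size=(10, 4))
bundle_b = np.tile([0, 1, 1, 0], (10, 1)) + 0.05 * rng.normal(size=(10, 4))
print(cluster_edges(np.vstack([bundle_a, bundle_b]), n_clusters=2))
```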

13.
We propose a system to restrict the manipulation of shape and appearance in an image to a valid subspace, which we learn from a collection of exemplar images. To this end, we automatically co-align a collection of images and learn a subspace model of shape and appearance using principal components. As finding perfect image correspondences for general images is not feasible, we build an approximate partial alignment and improve bad alignments by leveraging other, more successful alignments. Our system allows the user to change appearance and shape in real time, and the result is "projected" onto the subspace of meaningful changes. The changes in appearance and shape can either be locked together or performed independently. Additional applications include the suggestion of alternative shapes or appearances.
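A minimal sketch of the subspace projection, assuming hypothetical exemplar vectors: a PCA model is fitted to the aligned collection, and a user edit is reconstructed from its low-dimensional coordinates so it stays within the learned subspace of meaningful changes.

```python
import numpy as np
from sklearn.decomposition import PCA

# Hypothetical exemplar collection: each row is an aligned shape+appearance vector.
rng = np.random.default_rng(0)
exemplars = rng.normal(size=(100, 64))

subspace = PCA(n_components=10).fit(exemplars)

def project_to_valid_subspace(edited_vector):
    """Snap a user-edited vector back onto the learned subspace."""
    coords = subspace.transform(edited_vector[None, :])   # low-dimensional coordinates
    return subspace.inverse_transform(coords)[0]          # closest point in the subspace

edited = exemplars[0] + 0.5 * rng.normal(size=64)          # a free-form user edit
valid = project_to_valid_subspace(edited)                  # edit restricted to the subspace
```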

14.
We present a novel method to estimate an approximation of the reflectance characteristics of optically thick, homogeneous translucent materials using only a single photograph as input. First, we approximate the diffusion profile as a linear combination of piecewise constant functions, an approach that reduces the estimation to a linear system and maximizes robustness in the presence of suboptimal input data inferred from the image. We then fit a smoother, monotonically decreasing model, ensuring continuity of its first derivative. We show the feasibility of our approach and validate it in controlled environments, comparing well against physical measurements from previous works. Next, we explore the performance of our method in uncontrolled scenarios, where neither lighting nor geometry is known. We show that these can be roughly approximated from the image by making two simple assumptions: that the object is lit by a distant light source and that it is globally convex, allowing us to capture the visual appearance of the photographed material. Compared with previous works, our technique offers an attractive balance between visual accuracy and ease of use, allowing its use in a wide range of scenarios, including off-the-shelf single images, and thus extending the current repertoire of real-world data acquisition techniques.
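A minimal sketch of the first fitting step on toy data: the diffusion profile is approximated by piecewise constant (box) basis functions and the coefficients are obtained by linear least squares; the subsequent smooth, monotonically decreasing fit is only indicated in a comment.

```python
import numpy as np

# Toy radial samples of a diffusion profile R(r) (stand-ins for image-derived data).
r = np.linspace(0.01, 1.99, 200)
observed = np.exp(-3.0 * r) + 0.02 * np.random.default_rng(0).normal(size=r.size)

# Basis of piecewise constant (box) functions on radial bins.
edges = np.linspace(0.0, 2.0, 11)   # 10 bins
A = np.stack([(r >= lo) & (r < hi) for lo, hi in zip(edges[:-1], edges[1:])],
             axis=1).astype(float)

# Linear system minimization: coefficients of the piecewise constant approximation.
coeffs, *_ = np.linalg.lstsq(A, observed, rcond=None)

# A smoother, monotonically decreasing model could then be fitted to these
# coefficients, e.g. after enforcing a non-increasing ordering.
print(coeffs.round(3))
```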

15.
We introduce a novel method for enabling stereoscopic viewing of a scene from a single pre‐segmented image. Rather than attempting full 3D reconstruction or accurate depth map recovery, we hallucinate a rough approximation of the scene's 3D model using a number of simple depth and occlusion cues and shape priors. We begin by depth‐sorting the segments, each of which is assumed to represent a separate object in the scene, resulting in a collection of depth layers. The shapes and textures of the partially occluded segments are then completed using symmetry and convexity priors. Next, each completed segment is converted to a union of generalized cylinders yielding a rough 3D model for each object. Finally, the object depths are refined using an iterative ground fitting process. The hallucinated 3D model of the scene may then be used to generate a stereoscopic image pair, or to produce images from novel viewpoints within a small neighborhood of the original view. Despite the simplicity of our approach, we show that it compares favorably with state‐of‐the‐art depth ordering methods. A user study was conducted showing that our method produces more convincing stereoscopic images than existing semi‐interactive and automatic single image depth recovery methods.
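A minimal sketch of turning depth-sorted segments into a stereoscopic pair by per-layer horizontal disparity shifts, composited back to front; the segment labels, depths and disparity scale are hypothetical, and no hole filling, segment completion or generalized-cylinder modelling is included.

```python
import numpy as np

def stereo_pair(image, labels, layer_depths, max_disparity=8):
    """Render left/right views from a segmented image and per-segment depths.

    image: (H, W, 3) float RGB; labels: (H, W) integer segment ids;
    layer_depths: dict id -> depth in [0, 1] (1 = nearest). Layers are composited
    back to front, each shifted by +/- its disparity; disoccluded pixels keep the
    original background (no inpainting in this sketch).
    """
    H, W, _ = image.shape
    left, right = image.copy(), image.copy()
    for seg in sorted(layer_depths, key=layer_depths.get):       # far -> near
        d = int(round(max_disparity * layer_depths[seg]))
        ys, xs = np.nonzero(labels == seg)
        for view, sign in ((left, +1), (right, -1)):
            xn = np.clip(xs + sign * d, 0, W - 1)
            view[ys, xn] = image[ys, xs]
    return left, right
```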

16.
Word clouds are proliferating on the Internet and have received much attention in visual analytics. Although word clouds can help users understand the major content of a document collection quickly, their ability to visually compare documents is limited. This paper introduces a new method to create semantic‐preserving word clouds by leveraging tailored seam carving, a well‐established content‐aware image resizing operator. The method can optimize a word cloud layout by removing a left‐to‐right or top‐to‐bottom seam iteratively and gracefully from the layout. Each seam is a connected path of low energy regions determined by a Gaussian‐based energy function. With seam carving, we can pack the word cloud compactly and effectively, while preserving its overall semantic structure. Furthermore, we design a set of interactive visualization techniques for the created word clouds to facilitate visual text analysis and comparison. Case studies are conducted to demonstrate the effectiveness and usefulness of our techniques.
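A minimal sketch of removing one top-to-bottom seam by dynamic programming; the energy map is assumed to be given (the paper derives it from a Gaussian-based energy over the word cloud layout), and only a single seam removal is shown.

```python
import numpy as np

def remove_vertical_seam(energy, layout):
    """Remove one top-to-bottom seam of minimal cumulative energy.

    energy: (H, W) non-negative costs; layout: (H, W) array to carve.
    Returns the carved (H, W-1) layout.
    """
    H, W = energy.shape
    M = energy.astype(float)                       # cumulative minimal energy
    for y in range(1, H):
        left = np.r_[np.inf, M[y - 1, :-1]]
        up = M[y - 1]
        right = np.r_[M[y - 1, 1:], np.inf]
        M[y] += np.minimum(np.minimum(left, up), right)
    # Backtrack the connected path of low-energy pixels.
    seam = np.empty(H, dtype=int)
    seam[-1] = int(np.argmin(M[-1]))
    for y in range(H - 2, -1, -1):
        x = seam[y + 1]
        lo, hi = max(0, x - 1), min(W, x + 2)
        seam[y] = lo + int(np.argmin(M[y, lo:hi]))
    keep = np.ones((H, W), dtype=bool)
    keep[np.arange(H), seam] = False
    return layout[keep].reshape(H, W - 1)
```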

17.
Sharp edges are important shape features, and their extraction has been extensively studied on both point clouds and surfaces. We consider the problem of extracting sharp edges from a sparse set of colour-and-depth (RGB-D) images. The noise-ridden depth measurements are challenging for existing feature extraction methods that work solely in the geometric domain (e.g., points or meshes). By utilizing both colour and depth information, we propose a novel feature extraction method that produces much cleaner and more coherent feature lines. We make two technical contributions. First, we show that intensity edges can augment the depth map to improve normal estimation and feature localization from a single RGB-D image. Second, we design a novel algorithm for consolidating feature points obtained from multiple RGB-D images. By utilizing the normals and ridge/valley types associated with the feature points, our algorithm is effective in suppressing noise without smearing nearby features.
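A minimal sketch of the intuition behind the first contribution: combining normalized intensity and depth gradient magnitudes of a single RGB-D frame into one edge-strength map. The weights are illustrative assumptions, and the normal estimation and multi-view consolidation steps are not shown.

```python
import numpy as np
from scipy.ndimage import sobel

def rgbd_edge_map(intensity, depth, w_intensity=0.5, w_depth=0.5):
    """Combine intensity and depth gradients into one edge-strength map.

    intensity, depth: (H, W) float arrays from a single RGB-D image. Each
    gradient magnitude is normalized to [0, 1] before a weighted sum, so
    clean intensity edges can compensate for noisy depth measurements.
    """
    def grad_mag(img):
        gx, gy = sobel(img, axis=1), sobel(img, axis=0)
        m = np.hypot(gx, gy)
        return m / (m.max() + 1e-12)
    return w_intensity * grad_mag(intensity) + w_depth * grad_mag(depth)
```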

18.
Color images often have to be converted to grayscale for reproduction, artistic purposes, or subsequent processing. Methods for converting color images to grayscale aim to retain as much information about the original color image as possible while producing perceptually plausible grayscale results. Many conversion methods have recently been proposed, but their performance has not yet been assessed; the strengths and weaknesses of color-to-grayscale conversions are therefore not known. In this paper, we present the results of two subjective experiments in which a total of 24 color images were converted to grayscale using seven state-of-the-art conversions and evaluated by 119 human subjects using a paired comparison paradigm. We collected nearly 20,000 human responses and used them to evaluate the accuracy and preference of the color-to-grayscale conversions. To the best of our knowledge, the study presented in this paper is the first perceptual evaluation of color-to-grayscale conversions. Besides exposing the strengths and weaknesses of the evaluated methods, the aim of the study is to attain a deeper understanding of the field, which can accelerate the progress of color-to-grayscale conversion.
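A minimal sketch of summarizing paired-comparison data: given a win-count matrix over the compared conversions, it reports each method's fraction of won comparisons. This is a simple summary for illustration; the study's actual statistical analysis may differ (Thurstone or Bradley-Terry models are common refinements).

```python
import numpy as np

def preference_scores(wins):
    """Turn a paired-comparison win matrix into per-method preference scores.

    wins[i, j] = number of times method i was preferred over method j.
    Returns each method's fraction of won comparisons.
    """
    wins = np.asarray(wins, dtype=float)
    comparisons = wins + wins.T               # total comparisons per pair
    total_won = wins.sum(axis=1)
    total_played = comparisons.sum(axis=1)
    return total_won / np.maximum(total_played, 1.0)

# Example with three hypothetical conversions A, B, C.
print(preference_scores([[0, 12, 15],
                         [8,  0, 11],
                         [5,  9,  0]]).round(2))
```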

19.
Creating and animating subject‐specific anatomical models is traditionally a difficult process involving medical image segmentation, geometric corrections and the manual definition of kinematic parameters. In this paper, we introduce a novel template morphing algorithm that facilitates three‐dimensional modelling and parameterization of skeletons. Target data can be either medical images or surfaces of the whole skeleton. We incorporate prior knowledge about bone shape, the feasible skeleton pose and the morphological variability in the population. This allows for noise reduction, bone separation and the transfer, from the template, of anatomical and kinematical information not present in the input data. Our approach treats both local and global deformations in successive regularization steps: smooth elastic deformations are represented by an as‐rigid‐as‐possible displacement field between the reference and current configuration of the template, whereas global and discontinuous displacements are estimated through a projection onto a statistical shape model and a new joint pose optimization scheme with joint limits.

20.
Mappings between color spaces are ubiquitous in image processing problems such as gamut mapping, decolorization, and image optimization for color-blind people. Simple color transformations often result in information loss and ambiguities, and one wishes to find an image-specific transformation that preserves as much as possible of the structure of the original image in the target color space. In this paper, we propose Laplacian colormaps, a generic framework for structure-preserving color transformations between images. We use the image Laplacian to capture the structural information and show that if the color transformation between two images preserves the structure, the respective Laplacians have similar eigenvectors, or in other words, are approximately jointly diagonalizable. Employing the relation between joint diagonalizability and commutativity of matrices, we use Laplacian commutativity as a criterion of color mapping quality and minimize it with respect to the parameters of a color transformation to achieve optimal structure preservation. We show numerous applications of our approach, including color-to-gray conversion, gamut mapping, multispectral image fusion, and image optimization for color-deficient viewers.
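A minimal sketch of the commutativity criterion: 4-neighbour grid-graph Laplacians with Gaussian colour-affinity weights are built for the source and the mapped image, and the Frobenius norm of their commutator is returned. The weighting scheme and parameters are assumptions; the paper minimizes such a criterion over the colour transformation's parameters, which is not shown here, and this dense commutator is practical only for small images.

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

def grid_laplacian(image, sigma=0.1):
    """4-neighbour graph Laplacian of an (H, W, C) float image.

    Edge weights are exp(-||colour difference||^2 / (2 sigma^2)).
    """
    H, W, C = image.shape
    idx = np.arange(H * W).reshape(H, W)
    pairs = [(idx[:, :-1].ravel(), idx[:, 1:].ravel(),
              image[:, :-1].reshape(-1, C) - image[:, 1:].reshape(-1, C)),   # horizontal neighbours
             (idx[:-1, :].ravel(), idx[1:, :].ravel(),
              image[:-1, :].reshape(-1, C) - image[1:, :].reshape(-1, C))]   # vertical neighbours
    rows, cols, vals = [], [], []
    for a, b, diff in pairs:
        w = np.exp(-(diff ** 2).sum(axis=1) / (2 * sigma ** 2))
        rows += [a, b]; cols += [b, a]; vals += [w, w]
    A = sp.coo_matrix((np.concatenate(vals), (np.concatenate(rows), np.concatenate(cols))),
                      shape=(H * W, H * W)).tocsr()
    return sp.diags(np.asarray(A.sum(axis=1)).ravel()) - A

def commutativity_cost(src, mapped, sigma=0.1):
    """Frobenius norm of the commutator L_src L_mapped - L_mapped L_src."""
    L1, L2 = grid_laplacian(src, sigma), grid_laplacian(mapped, sigma)
    comm = L1 @ L2 - L2 @ L1
    return spla.norm(comm, "fro")
```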
