Similar Literature
20 similar records found.
1.
This paper presents an algorithm aimed at improving upon the already successful constrained Delaunay triangulation (CDT) skeletonisation technique. Using such a triangulation to construct a skeleton has proven very effective; it can, however, sometimes produce triangles that do not represent the true nature of the underlying shape. The contour pixels chosen for triangulation are of significant importance, as they determine the triangle edges that define the skeleton. The algorithm described in this paper addresses this problem by inserting new triangulation points at strategic locations in end, normal and junction triangles. Results show that the skeletons produced by this algorithm are accurate, robust against noise and, above all, comply much better with a human's perception of the image than those of the original triangulation method.
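For illustration, the end/normal/junction classification and the resulting skeleton can be sketched roughly as below. This is a minimal sketch that uses an unconstrained `scipy.spatial.Delaunay` in place of the paper's constrained triangulation; the contour sampling, the neighbour-count classification rule and the midpoint/centroid skeleton construction are illustrative assumptions rather than the published algorithm.

```python
import numpy as np
from scipy.spatial import Delaunay

def skeleton_from_contour(points):
    """Rough CDT-style skeleton: classify each triangle by how many internal
    neighbours it has, then link it to those neighbours through the midpoints
    of the shared edges."""
    tri = Delaunay(points)  # unconstrained Delaunay, used here as an approximation
    segments = []
    for t, nbrs in enumerate(tri.neighbors):
        internal = [n for n in nbrs if n != -1]
        # 1 internal neighbour -> 'end' triangle, 2 -> 'normal', 3 -> 'junction'
        kind = {1: "end", 2: "normal", 3: "junction"}.get(len(internal), "isolated")
        centroid = points[tri.simplices[t]].mean(axis=0)
        for n in internal:
            shared = np.intersect1d(tri.simplices[t], tri.simplices[n])  # shared edge vertices
            midpoint = points[shared].mean(axis=0)
            segments.append((kind, centroid, midpoint))
    return segments

# toy contour: two noisy horizontal edges of an elongated stroke
rng = np.random.default_rng(0)
xs = np.linspace(0.0, 10.0, 20)
contour = np.vstack([np.c_[xs, np.zeros_like(xs)], np.c_[xs, np.full_like(xs, 3.0)]])
contour += rng.normal(scale=0.05, size=contour.shape)
print(len(skeleton_from_contour(contour)), "skeleton segments")
```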

2.
In this paper, Delaunay triangulation is applied to the extraction of text areas in a document image. By representing the location of each connected component in the document image with its centroid, the page structure is described as a set of points in two-dimensional space. When Delaunay triangulation is imposed on these points, the text regions exhibit triangular features that distinguish them from image and drawing regions. For analysis, the Delaunay triangles are divided into four classes. The study reveals that specific triangles in text areas can be clustered together and identified as text body. Using this method, text regions in a document image containing fragments can also be recognized accurately. Experiments show that the method is also very efficient.
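A rough sketch of the centroid-triangulation idea is given below: label connected components, triangulate their centroids, and keep triangles whose edges are all short as candidate text clusters. The binarised input, the labelling step and the single edge-length threshold are simplifications for illustration; the paper itself distinguishes four triangle classes rather than using one threshold.

```python
import numpy as np
from scipy import ndimage
from scipy.spatial import Delaunay

def candidate_text_triangles(binary_image, max_edge=30.0):
    """Triangulate connected-component centroids of a binarised page and keep
    triangles whose edges are all short; in dense text such triangles cluster."""
    labels, n = ndimage.label(binary_image)  # connected components
    centroids = np.array(ndimage.center_of_mass(binary_image, labels, np.arange(1, n + 1)))
    tri = Delaunay(centroids)
    keep = []
    for simplex in tri.simplices:
        p = centroids[simplex]
        edges = [np.linalg.norm(p[i] - p[(i + 1) % 3]) for i in range(3)]
        if max(edges) < max_edge:  # short edges are typical of neighbouring characters
            keep.append(simplex)
    return centroids, keep
```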

3.
Although many image processing applications are ideally suited for parallel implementation, most researchers in imaging do not benefit from high‐performance computing on a daily basis. Essentially, this is due to the fact that no parallelization tools exist that truly match the image processing researcher's frame of reference. As it is unrealistic to expect imaging researchers to become experts in parallel computing, tools must be provided to allow them to develop high‐performance applications in a highly familiar manner. In an attempt to provide such a tool, we have designed a software architecture that allows transparent (i.e. sequential) implementation of data parallel imaging applications for execution on homogeneous distributed memory MIMD‐style multicomputers. This paper presents an extensive overview of the design rationale behind the software architecture, and gives an assessment of the architecture's effectiveness in providing significant performance gains. In particular, we describe the implementation and automatic parallelization of three well‐known example applications that contain many fundamental imaging operations: (1) template matching; (2) multi‐baseline stereo vision; and (3) line detection. Based on experimental results we conclude that our software architecture constitutes a powerful and user‐friendly tool for obtaining high performance in many important image processing research areas. Copyright © 2004 John Wiley & Sons, Ltd.

4.
We present a new, high‐quality compositing pipeline and navigation approach for variable resolution imagery. The motivation of this work is to explore the use of variable resolution images as a quick and accessible alternative to traditional gigapixel mosaics. Instead of the common tedious acquisition of many images using specialized hardware, variable resolution images can achieve zooms as deep as those of large mosaics, but with only a handful of images. For this approach to be a viable alternative, the state‐of‐the‐art in variable resolution compositing needs to be improved to match the high‐quality approaches commonly used in mosaic compositing. To this end, we provide a novel, variable resolution mosaic seam calculation and gradient domain color correction. This approach includes a new priority order graph cuts computation along with a practical data structure to keep memory overhead low. In addition, navigating variable resolution images is challenging, especially at the zoom factors targeted in this work. To address this challenge, we introduce a new image interaction for variable resolution imagery: a pan that automatically, and smoothly, hugs available resolution. Finally, we provide several real‐world examples of our approach producing high‐quality variable resolution mosaics with deep zooms typically associated with gigapixel photography.

5.
An improved shape retrieval algorithm based on triangulation is proposed. The basic idea is as follows: first, boundary tracing and corner detection are performed on the image; next, the corresponding point of each initial corner along the traced boundary is located, and the corners whose corresponding points are found are sorted according to the order of those points along the boundary; then, Delaunay triangulation is applied to the sorted corners, yielding a triangle sequence that represents the true shape of the object; finally, the angle histogram of the triangle sequence is computed as the shape feature for similarity matching. Experimental results show that the algorithm achieves high efficiency and retrieval accuracy.
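The last step of this pipeline, using the angle histogram of the triangle sequence as the shape feature, can be sketched as follows. Boundary tracing and corner detection are omitted here; the 18-bin histogram and the histogram-intersection similarity are assumptions for illustration.

```python
import numpy as np
from scipy.spatial import Delaunay

def angle_histogram(corners, bins=18):
    """Delaunay-triangulate the (already sorted) corner points and histogram the
    interior angles of the resulting triangles as a simple shape descriptor."""
    tri = Delaunay(corners)
    angles = []
    for simplex in tri.simplices:
        p = corners[simplex]
        for i in range(3):
            a, b, c = p[i], p[(i + 1) % 3], p[(i + 2) % 3]
            u, v = b - a, c - a
            cos_ang = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-12)
            angles.append(np.degrees(np.arccos(np.clip(cos_ang, -1.0, 1.0))))
    hist, _ = np.histogram(angles, bins=bins, range=(0, 180))
    return hist / max(hist.sum(), 1)

def similarity(h1, h2):
    """Histogram intersection: 1.0 for identical normalised histograms."""
    return float(np.minimum(h1, h2).sum())
```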

6.
We present an algorithm for obtaining a triangulation of multiple, non‐planar 3D polygons. The output minimizes additive weights, such as the total triangle areas or the total dihedral angles between adjacent triangles. Our algorithm generalizes a classical method for optimally triangulating a single polygon. The key novelty is a mechanism for avoiding non‐manifold outputs for two and more input polygons without compromising optimality. For better performance on real‐world data, we also propose an approximate solution by feeding the algorithm with a reduced set of triangles. In particular, we demonstrate experimentally that the triangles in the Delaunay tetrahedralization of the polygon vertices offer a reasonable trade-off between performance and optimality.
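The "classical method for optimally triangulating a single polygon" that this work generalises is the standard O(n^3) dynamic program over chains of polygon vertices. A minimal sketch is shown below for a convex polygon, using total triangle area as the additive weight; the area weight is just one of the possible additive weights the abstract mentions.

```python
import numpy as np
from functools import lru_cache

def optimal_polygon_triangulation(vertices):
    """Classical DP: minimise an additive per-triangle weight (here triangle
    area) over all triangulations of a convex polygon given in vertex order."""
    pts = np.asarray(vertices, dtype=float)
    n = len(pts)

    def weight(i, j, k):  # additive weight of triangle (i, j, k): its area
        a, b, c = pts[i], pts[j], pts[k]
        return 0.5 * abs((b[0] - a[0]) * (c[1] - a[1]) - (b[1] - a[1]) * (c[0] - a[0]))

    @lru_cache(maxsize=None)
    def best(i, j):  # best (cost, triangles) for the vertex chain i..j
        if j - i < 2:
            return 0.0, ()
        options = []
        for k in range(i + 1, j):
            ci, ti = best(i, k)
            cj, tj = best(k, j)
            options.append((ci + cj + weight(i, k, j), ti + tj + ((i, k, j),)))
        return min(options)

    return best(0, n - 1)

cost, triangles = optimal_polygon_triangulation([(0, 0), (2, 0), (3, 1), (2, 2), (0, 2)])
```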

7.
8.
We present a method to automatically decompose a light field into its intrinsic shading and albedo components. Contrary to previous work targeted to two‐dimensional (2D) single images and videos, a light field is a 4D structure that captures non‐integrated incoming radiance over a discrete angular domain. This higher dimensionality of the problem renders previous state‐of‐the‐art algorithms impractical either due to their cost of processing a single 2D slice, or their inability to enforce proper coherence in additional dimensions. We propose a new decomposition algorithm that jointly optimizes the whole light field data for proper angular coherence. For efficiency, we extend Retinex theory, working on the gradient domain, where new albedo and occlusion terms are introduced. Results show that our method provides 4D intrinsic decompositions difficult to achieve with previous state‐of‐the‐art algorithms. We further provide a comprehensive analysis and comparisons with existing intrinsic image/video decomposition methods on light field images.

9.
10.
The desire for the physical generation of computer art has seen a significant body of research that has resulted in sophisticated robots and painting machines, together with specialized algorithms mimicking particular artistic techniques. The resulting setups are often expensive and complex, making them unavailable for recreational and hobbyist use. In recent years, however, a new class of affordable low‐cost plotters and cutting machines has reached the market. In this paper, we present a novel system for the physical generation of line and cut‐out art based on digital images, targeted at such off‐the‐shelf devices. Our approach uses a meta‐optimization process to generate results that represent the tonal content of a digital image while conforming to the physical and mechanical constraints of home‐use devices. By flexibly combining basic sets of positional and shape encodings, we are able to recreate a wide range of artistic styles. Furthermore, our system optimizes the output in terms of visual perception based on the desired viewing distance, while remaining scalable with respect to the medium size.

11.
Palette‐based image decomposition has attracted increasing attention in recent years. A specific class of approaches has been proposed based on RGB‐space geometry; these construct convex hulls whose vertices act as palette colors. However, such palettes are not guaranteed to contain the representative colors that actually appear in the image, which makes editing palette colors for recoloring less intuitive and less predictable. Hence, we propose an improved geometric approach to address this issue. We use a polyhedron, but not necessarily a convex hull, in the RGB space to represent the color palette. We then formulate the task of palette extraction as an optimization problem which can be solved in a few seconds. Our palette has a higher degree of representativeness and maintains a relatively similar level of accuracy compared with previous methods. For layer decomposition, we compute layer opacities via simple mean value coordinates, which provides instant feedback without precomputation. We have demonstrated our method for image recoloring on a variety of examples. In comparison with state‐of‐the‐art works, our approach is generally more intuitive and efficient, with fewer artifacts.
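For context, the convex-hull palette that this class of geometric approaches starts from can be sketched in a few lines; the crude vertex reduction below is an assumption for illustration, and the paper's own palette comes from an optimised, not necessarily convex, polyhedron rather than a plain hull.

```python
import numpy as np
from scipy.spatial import ConvexHull

def convex_hull_palette(image_rgb, max_colors=6):
    """Baseline geometric palette: vertices of the convex hull of the image's
    RGB point cloud (the kind of palette this paper improves upon)."""
    pixels = image_rgb.reshape(-1, 3).astype(float)
    hull = ConvexHull(pixels)
    palette = pixels[hull.vertices]
    if len(palette) > max_colors:
        # crude reduction: keep the hull vertices farthest from the mean colour
        d = np.linalg.norm(palette - pixels.mean(axis=0), axis=1)
        palette = palette[np.argsort(d)[-max_colors:]]
    return palette
```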

12.
Nowadays, deep neural networks (DNNs) for image processing are becoming more complex; thus, reducing computational cost is increasingly important. This study highlights the construction of a DNN for real‐time image processing, training various image processing operators efficiently through multitask learning. For real‐time image processing, the proposed algorithm takes a joint upsampling approach through bilateral guided upsampling. For multitask learning, the overall network is based on an encoder‐decoder architecture consisting of encoding, processing, and decoding components, in which the encoding and decoding components are shared by all the image processing operators. In the processing component, a semantic guidance map, which contains processing information for each image processing operator, is estimated using simple linear shifts of the shared deep features. Through these components, the proposed algorithm requires an increase of only 5% in the number of parameters to add another image processing operator and achieves faster and higher performance than that of deep‐learning‐based joint upsampling methods in local image processing as well as global image processing.

13.
As an art form between drawing and sculpture, relief has been widely used in a variety of media for signs, narratives, decorations and other purposes. Traditional relief creation relies on both professional skills and artistic expertise, and is extremely time‐consuming. Recently, automatic or semi‐automatic relief modelling from a 3D object or a 2D image has been a subject of interest in computer graphics. Various methods have been proposed to generate reliefs with few user interactions or minor human effort, while preserving or enhancing the appearance of the input. This survey provides a comprehensive review of the advances in computer‐assisted relief modelling during the past decade. First, we provide an overview of relief types and their artistic characteristics. Then, we introduce the key techniques of object‐space methods and image‐space methods respectively. Advantages and limitations of each category are discussed in detail. We conclude the report by discussing directions for possible future research.

14.
Smoothing noise while preserving strong edges in images is an important problem in image processing. Image smoothing filters can be either explicit (based on local weighted averages) or implicit (based on global optimization). Implicit methods are usually time‐consuming and cannot be applied to joint image filtering tasks, i.e., leveraging the structural information of a guidance image to filter a target image. Previous deep‐learning‐based image smoothing filters are all implicit and therefore unavailable for joint filtering. In this paper, we propose to learn explicit guidance feature maps as well as offset maps from the guidance image and the smoothing parameter, which can be utilized to smooth the input itself or to filter images in other target domains. We design a deep convolutional neural network consisting of a fully convolutional block for guidance and offset map extraction together with a stacked spatially varying deformable convolution block for joint image filtering. Our models can approximate several representative image smoothing filters with high accuracy, comparable to state‐of‐the‐art methods, and serve as general tools for other joint image filtering tasks, such as color interpolation, depth map upsampling, saliency map upsampling, flash/non‐flash image denoising and RGB/NIR image denoising.

15.
The accuracy of a non-pixel-based skeletonization method is largely dependent on the contour information chosen as input. When using a constrained Delaunay triangulation to construct an object's skeleton, a number of contour pixels must be chosen as a basis for triangulation. This paper presents a new method of selecting these contour pixels. A new method for measuring skeletonization error is also proposed, which quantifies the deviation of a skeleton segment from the true medial axis of a stroke in an image. The goal of the proposed algorithm is to reduce this error to an acceptable level, whilst retaining the superior efficiency of previous non-pixel-based techniques. Experimental results show that the proposed method is adept at following the medial axis of an image and is capable of producing a skeleton that is consistent with a human's perception of the image. It is also computationally efficient and robust against noise.
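In the spirit of the error measure described here, the deviation of a skeleton from the true medial axis could be quantified as in the sketch below; the pixel-accurate reference axis from scikit-image and the mean nearest-distance aggregation are assumptions, not necessarily the paper's exact formulation.

```python
import numpy as np
from skimage.morphology import medial_axis

def skeleton_deviation(binary_shape, skeleton_points):
    """Mean distance from sampled skeleton points to the nearest pixel of a
    reference medial axis computed from the binary shape."""
    axis = medial_axis(binary_shape)      # reference 'true' medial axis
    axis_pts = np.argwhere(axis)          # (row, col) coordinates of axis pixels
    pts = np.asarray(skeleton_points, dtype=float)
    d = np.linalg.norm(pts[:, None, :] - axis_pts[None, :, :], axis=2)
    return float(d.min(axis=1).mean())    # average nearest-axis distance
```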

16.
We propose a framework of graph-based tools for the segmentation of microscopic cellular images. This framework is based on an object-oriented analysis of imaging problems in pathology. Our graph tools rely on a general formulation of discrete functional regularization on weighted graphs of arbitrary topology. This leads to a set of useful tools which can be combined to address various image segmentation problems in pathology. To provide fast image segmentation algorithms, we also propose a graph-based image simplification as a preprocessing step. The abilities of this set of discrete image processing tools are illustrated through automatic and interactive segmentation schemes for color cytological and histological image segmentation problems.
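The discrete regularization on weighted graphs that the framework builds on can be illustrated, in its simplest quadratic form, by the relaxation below, which balances fidelity to the initial values against weighted smoothness across edges; the graph construction and the fidelity weight lambda are assumptions for this sketch.

```python
import numpy as np

def regularize_on_graph(f0, neighbors, weights, lam=1.0, iters=50):
    """Quadratic regularization on a weighted graph: approximately minimise
    lam * sum_i (f_i - f0_i)^2 + sum_{i,j} w_ij (f_i - f_j)^2
    by Jacobi-style relaxation.

    neighbors[i] -- indices of vertices adjacent to vertex i
    weights[i]   -- matching list of edge weights w_ij >= 0
    """
    f0 = np.array(f0, dtype=float)
    f = f0.copy()
    for _ in range(iters):
        new_f = f.copy()
        for i, (nbrs, w) in enumerate(zip(neighbors, weights)):
            if len(nbrs) == 0:
                continue
            w = np.asarray(w, dtype=float)
            # closed-form minimiser of the local energy at vertex i
            new_f[i] = (lam * f0[i] + (w * f[nbrs]).sum()) / (lam + w.sum())
        f = new_f
    return f
```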

17.
Cross‐browser compatibility testing is concerned with identifying perceptible differences in the way a Web page is rendered across different browsers or configurations thereof. Existing automated cross‐browser compatibility testing methods are generally based on document object model (DOM) analysis, or in some cases, a combination of DOM analysis with screenshot capture and image processing. DOM analysis, however, may miss incompatibilities that arise not during DOM construction but rather during rendering. Conversely, DOM analysis produces false alarms because different DOMs may lead to identical or sufficiently similar renderings. This paper presents a novel method for cross‐browser testing based purely on image processing. The method relies on image segmentation to extract 'regions' from a Web page and computer vision techniques to extract a set of characteristic features from each region. Regions extracted from a screenshot taken on a baseline browser are compared against regions extracted from the browser under test based on characteristic features. A machine learning classifier is used to determine if differences between two matched regions should be classified as an incompatibility. An evaluation involving 140 pages shows that the proposed method achieves an F‐score exceeding 90%, outperforming a state‐of‐the‐art cross‐browser testing tool based on DOM analysis. Copyright © 2015 John Wiley & Sons, Ltd.
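A stripped-down version of the comparison step (regions matched between two browsers' screenshots and compared via characteristic features, with a fixed threshold standing in for the learned classifier) might look like the sketch below; the colour-histogram feature, the pre-matched region boxes and the threshold value are assumptions.

```python
import cv2
import numpy as np

def region_features(screenshot, box):
    """Characteristic features of one page region: a normalised 3-D colour histogram."""
    x, y, w, h = box
    roi = screenshot[y:y + h, x:x + w]
    hist = cv2.calcHist([roi], [0, 1, 2], None, [8, 8, 8], [0, 256] * 3)
    return cv2.normalize(hist, hist).flatten()

def flag_incompatibilities(baseline, under_test, matched_boxes, threshold=0.85):
    """Compare matched regions from a baseline browser and the browser under
    test; regions whose feature correlation drops below the threshold are flagged."""
    flagged = []
    for box_a, box_b in matched_boxes:
        score = cv2.compareHist(region_features(baseline, box_a),
                                region_features(under_test, box_b),
                                cv2.HISTCMP_CORREL)
        if score < threshold:
            flagged.append((box_a, box_b, float(score)))
    return flagged
```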

18.
An effective triangulation algorithm for implicit surfaces is proposed. Starting from a seed point on the implicit surface, the boundary of the current mesh is maintained as an expanding polygon, and the vertex of this polygon with the smallest angle is chosen as the expansion point. The triangles to be generated at the expansion point are computed, and collision detection is performed to prevent newly generated triangles from overlapping existing ones. During triangulation the expanding polygon changes continuously, so these steps are repeated until no expanding polygon remains. The algorithm is applied to the triangulation of both analytic implicit surfaces and variational implicit surfaces. Experimental results show that the algorithm requires no remeshing step, produces high-quality triangle meshes, and adapts the mesh to surface curvature, demonstrating its effectiveness.

19.
Superpixel segmentation is a popular image pre‐processing technique in many computer vision applications. In this paper, we present a novel superpixel generation algorithm by agglomerative clustering with quadratic error minimization. We use a quadratic error metric (QEM) to measure the difference of spatial compactness and colour homogeneity between superpixels. Based on the quadratic function, we propose a bottom‐up greedy clustering algorithm to obtain higher quality superpixel segmentation. There are two steps in our algorithm: merging and swapping. First, we calculate the merging cost of two superpixels and iteratively merge the pair with the minimum cost until the termination condition is satisfied. Then, we optimize the boundary of superpixels by swapping pixels according to their swapping cost to improve the compactness. Due to the quadratic nature of the energy function, each of these atomic operations has only O(1) time complexity. We compare the new method with other state‐of‐the‐art superpixel generation algorithms on two datasets, and our algorithm demonstrates superior performance.
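Because the error metric is quadratic, the cost of merging two clusters has a closed form that depends only on their sizes and mean feature vectors, which is what makes each atomic merge O(1). A minimal sketch under that assumption (with 5-D colour-plus-position features) is shown below; the exact terms of the paper's QEM are not reproduced here.

```python
import numpy as np

class Cluster:
    """Superpixel candidate summarised by its pixel count and feature sum,
    e.g. of 5-D [L, a, b, x, y] vectors; enough to evaluate a quadratic error."""
    def __init__(self, feature_sum, n=1):
        self.n = n
        self.sum = np.array(feature_sum, dtype=float)

    @property
    def mean(self):
        return self.sum / self.n

def merge_cost(a, b):
    """Increase in total squared error if a and b are merged (Ward-style), O(1)."""
    diff = a.mean - b.mean
    return (a.n * b.n) / (a.n + b.n) * float(diff @ diff)

def merge(a, b):
    """Constant-time merge: the combined cluster keeps only count and feature sum."""
    return Cluster(a.sum + b.sum, a.n + b.n)
```

In the full algorithm the pair with the smallest merging cost would be popped from a priority queue and merged until the target superpixel count is reached, after which boundary pixels are swapped whenever doing so lowers the same quadratic energy.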

20.
This paper presents a new two‐step color transfer method which includes color mapping and detail preservation. To map source colors to target colors, which come from an image or a palette, the proposed similarity‐preserving color mapping algorithm uses the similarities between pixel colors and dominant colors, as existing algorithms do, and additionally emphasizes the similarities among the source image's own pixel colors. Detail preservation is performed by an ℓ0 gradient‐preserving algorithm: it relaxes the large gradients of the sparse pixels along color region boundaries and preserves the small gradients of pixels within color regions. The proposed method preserves source image color similarity and image details well. Extensive experiments demonstrate that the proposed approach achieves state‐of‐the‐art visual performance.

