Similar Documents
20 similar documents found (search time: 787 ms)
1.
Taking road data from colour digital raster maps as the processing target, this paper proposes a method for rapidly vectorizing road-class linear features based on a variable rectangular tracking box. After establishing rules for road vectorization, the method applies the variable rectangular tracking-box technique, combined with endpoint (intersection) detection and inertial road-extension point detection, to vectorize raster-map roads quickly, overcoming the shortcomings of several commonly used vectorization methods. Experimental results show that the vectorization quality of the method is highly satisfactory.

2.
Vectorizing line drawings is necessary for the digital workflows of 2D animation and engineering design, but it is challenging due to the ambiguity of topology, especially at junctions. Existing vectorization methods either suffer from low accuracy or cannot deal with high-resolution images. To handle a variety of challenging cases containing different kinds of complex junctions, we propose a two-phase line drawing vectorization method that analyzes the global and local topology. In the first phase, we subdivide the lines into partial curves, and in the second phase, we reconstruct the topology at junctions. With the overall topology estimated in the two phases, we can trace and vectorize the curves. To qualitatively and quantitatively evaluate our method and compare it with existing methods, we conduct extensive experiments not only on existing datasets but also on our newly synthesized dataset, which contains different types of complex and ambiguous junctions. Experimental statistics show that our method greatly outperforms existing methods in terms of computational speed and achieves visually better topology reconstruction accuracy.

3.
A New Method for Bitmap Vectorization (cited 5 times: 0 self-citations, 5 by others)
In the vectorization of images, whether the geometric feature points of an image are captured accurately and efficiently directly affects the quality of the result, and existing work offers no truly effective method. By decomposing the bitmap's boundary contours appropriately, this paper identifies the principal contour features and obtains the key contour feature points; interpolating these feature points with the original contour points not only preserves the original feature information of the image well, with little distortion, but also suits many forms of image vectorization algorithms. The approach has produced good results in image applications in CAD/CAM.

4.
A three-stage method for compressing bi-level line-drawing images is proposed. In the first stage, the raster image is vectorized using a combined skeletonising and line-tracing algorithm. A feature image is then reconstructed from the extracted vector elements. In the second stage, the original image is processed by a feature-based filter that removes noise from the object outlines, improving image quality and compression performance. In the final stage, the filtered raster image is compressed using a standard compression technique, JBIG. For a set of test images, the method achieves a compression ratio of 40:1, compared with 33:1 for JBIG alone. Received: 25 November 1998 / Received in revised form: 1 April 1999 / Accepted: 12 April 1999

5.
Topology has been an important tool for analyzing scalar data and flow fields in visualization. In this work, we analyze the topology of multivariate image and volume data sets with discontinuities in order to create an efficient, raster-based representation we call IStar. Specifically, the topology information is used to create a dual structure that contains nodes and connectivity information for every segmentable region in the original data set. This graph structure, along with a sampled representation of the segmented data set, is embedded into a standard raster image which can then be substantially downsampled and compressed. During rendering, the raster image is upsampled and the dual graph is used to reconstruct the original function. Unlike traditional raster approaches, our representation can preserve sharp discontinuities at any level of magnification, much like scalable vector graphics. However, because our representation is raster-based, it is well suited to the real-time rendering pipeline. We demonstrate this by reconstructing our data sets on graphics hardware at real-time rates.
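The "dual structure" this abstract describes can be sketched concretely. The following is a minimal illustration (not the IStar implementation): given a segmented raster, build one node per region and an edge wherever two regions touch.

```python
# Sketch (not the authors' code): build a region-adjacency "dual" graph
# from a segmented raster -- one node per region id, with an edge wherever
# two different regions share a pixel border.

def region_adjacency(labels):
    """labels: 2D list of region ids. Returns {region: set(neighbour regions)}."""
    h, w = len(labels), len(labels[0])
    graph = {}
    for y in range(h):
        for x in range(w):
            a = labels[y][x]
            graph.setdefault(a, set())
            # Look right and down only; edges are added symmetrically.
            for dy, dx in ((0, 1), (1, 0)):
                ny, nx = y + dy, x + dx
                if ny < h and nx < w:
                    b = labels[ny][nx]
                    if a != b:
                        graph[a].add(b)
                        graph.setdefault(b, set()).add(a)
    return graph

seg = [[0, 0, 1],
       [0, 2, 1],
       [2, 2, 1]]
adj = region_adjacency(seg)
```

Storing this graph alongside a heavily downsampled copy of the raster is what lets the representation reconstruct sharp region boundaries at any magnification.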

6.
Image spam is unsolicited bulk email, where the message is embedded in an image. Spammers use such images to evade text-based filters. In this research, we analyze and compare two methods for detecting spam images. First, we consider principal component analysis (PCA), where we determine eigenvectors corresponding to a set of spam images and compute scores by projecting images onto the resulting eigenspace. The second approach focuses on the extraction of a broad set of image features and selection of an optimal subset using support vector machines (SVM). Both of these detection strategies provide high accuracy with low computational complexity. Further, we develop a new spam image dataset that cannot be detected using our PCA or SVM approach. This new dataset should prove valuable for improving image spam detection capabilities.
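The PCA scoring idea can be sketched on toy data: find the dominant eigenvector of the spam set's covariance (here by power iteration, on 2-D features rather than real images) and score a sample by its residual after projection, so that samples close to the "spam subspace" get low scores. This is an illustration of the technique, not the paper's pipeline.

```python
# Toy sketch of PCA-based spam scoring: a small residual after projecting
# onto the spam set's dominant eigenvector means "close to the spam model".

def mean(vs):
    n = len(vs)
    return [sum(v[i] for v in vs) / n for i in range(len(vs[0]))]

def top_eigenvector(cov, iters=100):
    # Power iteration for the dominant eigenvector of a small covariance matrix.
    v = [1.0] * len(cov)
    for _ in range(iters):
        w = [sum(cov[i][j] * v[j] for j in range(len(v))) for i in range(len(v))]
        norm = sum(x * x for x in w) ** 0.5
        v = [x / norm for x in w]
    return v

def pca_score(train, sample):
    mu = mean(train)
    centred = [[x - m for x, m in zip(v, mu)] for v in train]
    d = len(mu)
    cov = [[sum(c[i] * c[j] for c in centred) / len(centred)
            for j in range(d)] for i in range(d)]
    e = top_eigenvector(cov)
    s = [x - m for x, m in zip(sample, mu)]
    proj = sum(a * b for a, b in zip(s, e))
    resid = [a - proj * b for a, b in zip(s, e)]
    return sum(x * x for x in resid) ** 0.5   # distance to the eigenspace

spam = [[1.0, 1.1], [2.0, 1.9], [3.0, 3.2], [4.0, 3.9]]
on_axis = pca_score(spam, [2.5, 2.5])    # lies near the dominant direction
off_axis = pca_score(spam, [2.5, -2.5])  # lies far from it
```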

7.
Color scribbling is a unique form of illustration where artists use compact, overlapping, and monochromatic scribbles at microscopic scale to create astonishing colorful images at macroscopic scale. The creation process is skill-demanding and time-consuming, typically involving drawing monochromatic scribbles layer by layer to delicately depict true-color subjects using a limited color palette. In this work, we present a novel computational framework for automatic generation of color scribble images from arbitrary raster images. The core contribution of our work lies in a novel color dithering model tailor-made for synthesizing a smooth color appearance using multiple layers of overlapped monochromatic strokes. Specifically, our system reconstructs the appearance of the input image by (i) generating layers of monochromatic scribbles based on a limited color palette derived from the input image, and (ii) optimizing the drawing sequence among layers to minimize the visual color dissimilarity between the dithered image and the original image as well as the color banding artifacts. We demonstrate the effectiveness and robustness of our algorithm with various convincing results synthesized from a variety of input images with different stroke patterns. The experimental study further shows that our approach faithfully captures the scribble style and the color presentation at microscopic and macroscopic scales, respectively, which is otherwise difficult for state-of-the-art methods.

8.
In this paper, we argue for learning dissimilarity for interactive search in content-based image retrieval. In the literature, dissimilarity is often learned via the feature space through feature selection, feature weighting, or adjusting the parameters of a function of the features. Unlike existing techniques, we use feedback to adjust the dissimilarity space independently of the feature space, which has the great advantage of manipulating dissimilarity directly. To create a dissimilarity space, we use the method proposed by Pekalska and Duin: selecting a set of images called prototypes and computing distances to those prototypes for all images in the collection. After the user gives feedback, we apply active learning with a one-class support vector machine to decide the movement of images such that relevant images stay close together while irrelevant ones are pushed away (the work of Guo). The dissimilarity space is then adjusted accordingly. Results on a Corel dataset of 10,000 images and a TRECVID collection of 43,907 keyframes show that our proposed approach is not only intuitive but also significantly improves retrieval performance.
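The prototype construction after Pekalska and Duin is easy to sketch: each image is represented not by features but by its vector of distances to a few prototype images, and retrieval ranks by Euclidean distance between these vectors. The toy "images" and distance below are placeholders, not the paper's data.

```python
# Sketch of a prototype-based dissimilarity space: images become vectors of
# distances to prototypes; ranking is Euclidean distance between such vectors.

def dissimilarity_vector(image, prototypes, dist):
    return [dist(image, p) for p in prototypes]

def rank(query, collection, prototypes, dist):
    q = dissimilarity_vector(query, prototypes, dist)
    def euclid(v):
        return sum((a - b) ** 2 for a, b in zip(q, v)) ** 0.5
    vecs = [dissimilarity_vector(img, prototypes, dist) for img in collection]
    return sorted(range(len(collection)), key=lambda i: euclid(vecs[i]))

# Toy "images" as tiny grey histograms; L1 distance stands in for a real
# image dissimilarity measure.
l1 = lambda a, b: sum(abs(x - y) for x, y in zip(a, b))
protos = [[0, 0, 1], [1, 0, 0]]
coll = [[0, 0, 1], [1, 1, 0], [0, 1, 0]]
order = rank([0, 0, 1], coll, protos, l1)
```

Relevance feedback then moves points in this dissimilarity space directly, without touching the underlying features.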

9.
New applications are emerging every day exploiting the huge data volume in community photo collections. Most focus on popular subsets, e.g., images containing landmarks or associated with Wikipedia articles. In this work we are concerned with the problem of accurately finding the location where a photo was taken without needing any metadata, that is, solely by its visual content. We also recognize landmarks where applicable, automatically linking them to Wikipedia. We show that the time is right for automating the geo-tagging process, and we show how this can work at large scale. In doing so, we do exploit redundancy of content in popular locations—but unlike most existing solutions, we do not restrict ourselves to landmarks. In other words, we can compactly represent the visual content of all the thousands of images depicting, e.g., the Parthenon, and still retrieve any single, isolated, non-landmark image like a house or graffiti on a wall. Starting from an existing, geo-tagged dataset, we cluster images into sets of different views of the same scene. This is a very efficient, scalable, and fully automated mining process. We then align all views in a set to one reference image and construct a 2D scene map. Our indexing scheme operates directly on scene maps. We evaluate our solution on a challenging one-million urban image dataset and provide public access to our service through our online application, VIRaL.

10.
《Computers & Geosciences》2006,32(2):184-194
The evolution of open standards, especially those pertaining to the family of XML technologies, has had a considerable impact on the way the Geomatics community addresses the acquisition, storage, analysis and display of spatial data. The most recent version of the GML specification enables the merging of vector and raster data into a single "open" format. The notion of "coverage" as described in GML 3.0 can be the equivalent of a raster multi-band dataset. In addition, vector data storage is described in detail through the GML Schemas, and XML itself can store the values of a raster dataset as the values of a multi-table dataset. Under these circumstances, an issue that must be addressed is the transformation of raster data into XML format and their subsequent visualization through SVG. The objective of this paper is to give an overview of the steps that can be followed to embody open standards and XML technologies in the raster domain. The last part of the work refers to a case study that suggests a step-by-step methodology for accomplishing classification, an important function in Cartography and Remote Sensing, using the XML-encoded images.
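The raster-to-XML step can be sketched with the standard library. The element names below are illustrative only, not the GML 3.0 coverage schema; the point is simply that raster band values serialize naturally as text content inside an XML tree.

```python
# Sketch of serializing one raster band as XML (illustrative element names,
# NOT the GML coverage encoding), using only the standard library.
import xml.etree.ElementTree as ET

def raster_to_xml(band):
    """band: 2D list of cell values. Returns the XML document as a string."""
    root = ET.Element("coverage")
    grid = ET.SubElement(root, "grid",
                         rows=str(len(band)), cols=str(len(band[0])))
    for row in band:
        # One <row> element per raster row, values space-separated.
        ET.SubElement(grid, "row").text = " ".join(str(v) for v in row)
    return ET.tostring(root, encoding="unicode")

xml_text = raster_to_xml([[0, 255], [128, 64]])
```

An SVG renderer (or an XSLT transform) can then consume such a document to draw one rectangle per cell, which is the visualization route the paper discusses.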

13.
This paper presents a benchmark for evaluating raster-to-vector conversion systems. The benchmark is designed for evaluating the performance of graphics recognition systems on images that contain solid polygons. Our contribution is two-fold: an object-mapping algorithm to spatially locate errors within the drawing, and a cycle-graph matching distance that indicates the accuracy of the polygonal approximation. The performance measure incorporates many aspects and factors based on uniform units while the method remains non-rigid (thresholdless). This benchmark gives a scientific comparison at the polygon level of coherency and uses practical performance evaluation methods that can be applied to complete polygonization systems. A system dedicated to cadastral map vectorization was evaluated under this benchmark, and its performance results are presented in this paper. By stress testing a given system, we demonstrate that our protocol can reveal its strengths and weaknesses. The behavior of our set of indices was analyzed under increasing image degradation. We hope that this benchmark will help assess the state of the art in graphics recognition and current vectorization technologies.
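To give a feel for what a polygonal-approximation accuracy index measures, here is a deliberately crude stand-in: a symmetric mean nearest-vertex distance between a ground-truth polygon and a vectorized one. The paper's cycle-graph matching distance is far more sophisticated; this sketch only illustrates the kind of quantity being computed.

```python
def avg_nearest_vertex_error(poly_a, poly_b):
    """Symmetric mean distance from each vertex of one polygon to the
    nearest vertex of the other -- a crude illustration, not the paper's
    cycle-graph matching distance."""
    def one_way(src, dst):
        total = 0.0
        for (x, y) in src:
            total += min(((x - u) ** 2 + (y - v) ** 2) ** 0.5 for (u, v) in dst)
        return total / len(src)
    return 0.5 * (one_way(poly_a, poly_b) + one_way(poly_b, poly_a))

truth = [(0, 0), (4, 0), (4, 4), (0, 4)]
perfect = [(0, 0), (4, 0), (4, 4), (0, 4)]   # exact vectorization
shifted = [(1, 0), (5, 0), (5, 4), (1, 4)]   # systematic 1-unit offset
```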

14.
杜海顺, 蒋曼曼, 王娟, 王胜. 《计算机科学》 (Computer Science), 2017, 44(10): 302-306, 311
Crop disease is one of the major agricultural disasters in China, and accurately identifying the disease type is the key to its control. We first collected 441 images of 22 common leaf diseases affecting four crops: wheat, maize, peanut, and cotton. After segmenting the leaf and the lesions in each disease image, we extracted leaf feature parameters describing the crop species and lesion feature parameters describing the disease type. These two groups of parameters were then concatenated and normalized to obtain a feature vector for each disease image, and the feature vectors of all images were used to build a crop leaf disease dataset. Finally, considering both feature importance and the local structure of the data space, we propose a double-weighted collaborative representation classification (DWCRC) method and apply it to crop leaf disease recognition. Experimental results on the crop leaf disease dataset show that the proposed method achieves a high recognition rate.
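The feature pipeline described above (concatenate leaf and lesion features, then normalize) can be sketched as follows. A nearest-neighbour rule stands in for the paper's DWCRC classifier, which solves a doubly weighted collaborative representation instead; the toy feature values are invented for illustration.

```python
# Sketch of the pipeline: concatenated [leaf, lesion] features are min-max
# normalised, then classified. Nearest neighbour here is only a stand-in
# for the paper's DWCRC model.

def normalise(samples):
    """Min-max normalise each dimension across all samples."""
    dims = len(samples[0])
    lo = [min(s[d] for s in samples) for d in range(dims)]
    hi = [max(s[d] for s in samples) for d in range(dims)]
    return [[(s[d] - lo[d]) / (hi[d] - lo[d]) if hi[d] > lo[d] else 0.0
             for d in range(dims)] for s in samples]

def classify(train, labels, sample):
    def d2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    scaled = normalise(train + [sample])   # normalise jointly
    tr, sm = scaled[:-1], scaled[-1]
    best = min(range(len(tr)), key=lambda i: d2(tr[i], sm))
    return labels[best]

# Toy samples: [leaf feature, lesion feature]; labels are disease names
# (hypothetical values, not from the paper's dataset).
train = [[0.9, 0.1], [0.8, 0.2], [0.1, 0.9], [0.2, 0.8]]
labels = ["wheat_rust", "wheat_rust", "cotton_blight", "cotton_blight"]
pred = classify(train, labels, [0.85, 0.15])
```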

15.
To vectorize jacquard fabric images, which contain few colours and large colour blocks, an improved Potrace vectorization algorithm is proposed. The original Potrace algorithm can only vectorize binary images; the improved algorithm decomposes the colour blocks of the bitmap one by one into closed paths, assembles these closed paths into a tree structure according to their spatial arrangement, vectorizes them, and finally produces a complete vector graphic. The algorithm has produced good results in practical applications.
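The first step such an extension needs is straightforward to sketch: split the colour bitmap into one binary mask per colour, so that each mask can be traced into closed paths by the original (binary-only) Potrace algorithm. The decomposition below is a minimal illustration, not the paper's implementation.

```python
# Sketch: decompose a colour bitmap into per-colour binary masks, each of
# which a binary tracer like Potrace could then convert into closed paths.

def colour_masks(bitmap):
    """bitmap: 2D list of colour values. Returns {colour: binary mask}."""
    colours = {c for row in bitmap for c in row}
    return {c: [[1 if v == c else 0 for v in row] for row in bitmap]
            for c in colours}

img = [["red", "red", "blue"],
       ["red", "blue", "blue"]]
masks = colour_masks(img)
```

Tracing each mask and then nesting the resulting paths by containment yields the tree structure the abstract describes.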

16.
Vector graphics offer a new solution to the representation of raster images. Among the many types of vectorized representations, the most popular is the mesh representation, which inherits the benefits of vector graphics. Inspired by meshes, we propose a novel patch-based representation for raster images, in which pixels are partitioned into regions, and pixels belonging to the same region are converted into a 3D point cloud and approximated by a 3D planar patch with proper boundaries in a variational way. The resulting patches are then encoded via a half-edge structure for storage. The key point is that the boundary vertices are not fixed at the positions of the sample points, i.e. the converted pixels, but depend on the optimal position of the patch, which theoretically reduces the fitting errors. Experiments show that our algorithm produces better results.
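The per-region planar fit can be sketched directly: treat each pixel as a 3D point (x, y, intensity) and fit z = a·x + b·y + c by least squares via the normal equations. This shows the fitting step only, under the simplifying assumption of a height-field plane; the paper's variational formulation additionally optimizes the patch boundaries.

```python
# Sketch: least-squares plane fit z = a*x + b*y + c to a pixel point cloud,
# solved with a tiny Gauss-Jordan elimination (no external libraries).

def solve3(m, rhs):
    """Solve a 3x3 linear system by Gauss-Jordan with partial pivoting."""
    a = [row[:] + [r] for row, r in zip(m, rhs)]
    for i in range(3):
        p = max(range(i, 3), key=lambda r: abs(a[r][i]))
        a[i], a[p] = a[p], a[i]
        for r in range(3):
            if r != i:
                f = a[r][i] / a[i][i]
                a[r] = [x - f * y for x, y in zip(a[r], a[i])]
    return [a[i][3] / a[i][i] for i in range(3)]

def fit_plane(points):
    """Normal equations for minimising sum((a*x + b*y + c - z)^2)."""
    s = {k: 0.0 for k in "xx xy yy x y n xz yz z".split()}
    for x, y, z in points:
        s["xx"] += x * x; s["xy"] += x * y; s["yy"] += y * y
        s["x"] += x; s["y"] += y; s["n"] += 1
        s["xz"] += x * z; s["yz"] += y * z; s["z"] += z
    m = [[s["xx"], s["xy"], s["x"]],
         [s["xy"], s["yy"], s["y"]],
         [s["x"],  s["y"],  s["n"]]]
    return solve3(m, [s["xz"], s["yz"], s["z"]])

# Pixels sampled exactly from the plane z = 2x - y + 3.
pts = [(x, y, 2 * x - y + 3) for x in range(3) for y in range(3)]
a, b, c = fit_plane(pts)
```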

18.
Suppose that we have a matrix of dissimilarities between the n images of a database. For a new image, we would like to select the most similar image in our database. Because it may be too expensive to compute the dissimilarities from the new object to all images of our database, we want to find p ≪ n "vantage objects" (Pattern Recognition 35 (2002) 69) from our database, and select a matching image according to the least Euclidean distance between the vector of dissimilarities from the new image to the vantage objects and the corresponding vectors for the images of the database. In this paper, we treat the choice of suitable vantage objects. We suggest a loss measure to assess the quality of a set of vantage objects: for every image, we select a matching image from the remaining images of the database by use of the vantage set, and we average the resulting dissimilarities. We compare two classes of choice strategies: the first is based on a stepwise forward selection of vantage objects to optimize the loss measure; the second is to choose objects as representative as possible of the whole range of the database.
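The matching scheme itself is simple to sketch: summarise every database object by its p dissimilarities to the vantage objects, then match a query by the least Euclidean distance between these p-dimensional vectors. The toy objects below are plain numbers with |a − b| standing in for an image dissimilarity measure.

```python
# Sketch of vantage-object retrieval: p << n distances summarise each
# object, and matching is done in the resulting p-dimensional space.

def vantage_vector(obj, vantage, dist):
    return [dist(obj, v) for v in vantage]

def match(query, database, vantage, dist):
    q = vantage_vector(query, vantage, dist)
    def euclid(v):
        return sum((a - b) ** 2 for a, b in zip(q, v)) ** 0.5
    return min(range(len(database)),
               key=lambda i: euclid(vantage_vector(database[i], vantage, dist)))

# Toy setup: five database "objects", two vantage objects.
dist = lambda a, b: abs(a - b)
db = [0, 10, 20, 30, 40]
vantage = [0, 40]
best = match(23, db, vantage, dist)   # index of the matched object
```

For a query, only p dissimilarities need computing instead of n, which is the whole point of the scheme.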

19.
For embedded applications with data-level parallelism, a vector processor offers high performance at low power consumption and low design complexity. Unlike superscalar and VLIW designs, a vector processor is scalable and can optimally match specific application requirements. To demonstrate that vector architectures meet the requirements of embedded media processing, we evaluate the Vector IRAM, or VIRAM (pronounced "V-IRAM"), architecture developed at UC Berkeley, using benchmarks from the Embedded Microprocessor Benchmark Consortium (EEMBC). Our evaluation covers all three components of the VIRAM architecture: the instruction set, the vectorizing compiler, and the processor microarchitecture. We show that a compiler can vectorize embedded tasks automatically without compromising code density. We also describe a prototype vector processor that outperforms high-end superscalar and VLIW designs by 1.5x to 100x for media tasks, without compromising power consumption. Finally, we demonstrate that clustering and modular design techniques let a vector processor scale to tens of arithmetic data paths before wide instruction-issue capabilities become necessary.

20.
Spectral Monte-Carlo methods are currently the most powerful techniques for simulating light transport with wavelength-dependent phenomena (e.g., dispersion, colored particle scattering, or diffraction gratings). Compared to trichromatic rendering, sampling the spectral domain requires significantly more samples for noise-free images. Inspired by gradient-domain rendering, which estimates image gradients, we propose spectral gradient sampling to estimate the gradients of the spectral distribution inside a pixel. These gradients can be sampled with a significantly lower variance by carefully correlating the path samples of a pixel in the spectral domain, and we introduce a mapping function that shifts paths with wavelength-dependent interactions. We compute the result of each pixel by integrating the estimated gradients over the spectral domain using a one-dimensional screened Poisson reconstruction. Our method improves convergence and reduces chromatic noise from spectral sampling, as demonstrated by our implementation within a conventional path tracer.
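The 1-D screened Poisson reconstruction step can be illustrated on toy data: given noisy per-bin (primal) estimates plus low-variance gradient estimates between bins, minimise a screened objective that trades the two off. The simple Jacobi-style solver and the weight alpha below are illustrative choices, not the paper's solver.

```python
# Sketch of 1-D screened Poisson reconstruction: minimise
#   alpha * sum_i (f_i - primal_i)^2 + sum_i (f_{i+1} - f_i - g_i)^2
# with fixed-point (Jacobi) iterations on the normal equations.

def reconstruct(primal, grads, alpha=0.2, iters=2000):
    f = primal[:]
    n = len(f)
    for _ in range(iters):
        new = f[:]
        for i in range(n):
            num = alpha * primal[i]
            den = alpha
            if i > 0:              # coupling term (f[i] - f[i-1] - g[i-1])^2
                num += f[i - 1] + grads[i - 1]
                den += 1.0
            if i < n - 1:          # coupling term (f[i+1] - f[i] - g[i])^2
                num += f[i + 1] - grads[i]
                den += 1.0
            new[i] = num / den     # stationarity condition for f[i]
        f = new
    return f

primal = [0.0, 1.2, 1.8, 3.1]   # noisy per-bin spectral estimates
grads = [1.0, 1.0, 1.0]         # low-noise gradient estimates
result = reconstruct(primal, grads)
```

The reconstruction pulls the bin-to-bin differences toward the clean gradients while the screening term keeps the overall level anchored to the primal estimates.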
