Similar Documents
20 similar documents found (search time: 31 ms)
1.
Objective: Existing vehicle-logo recognition algorithms all combine classical image feature operators with different classifiers, and none analyzes the structural characteristics of logo images. Considering both the grayscale and the structural features of logo images, this paper proposes a vehicle-logo recognition method driven by a random point-pair strategy over the foreground and background skeleton regions. Method: The algorithm divides a standard logo image into foreground and background regions and extracts the skeleton region of each. Points are sampled at random within these regions to form pairs, the pairs are checked for validity, and the point pairs that can represent the logo are kept as features. A point-pair feature expresses the similarity of the local regions around the two points, reflecting the light/dark gray-level relation between the logo pattern and the background in real imaging. Results: Experiments on 19,044 logo images captured by a checkpoint system show that, compared with recognition methods based on grayscale features alone, the proposed point-pair feature method recognizes logos better, reaching an accuracy of 95.7%. Under weak illumination it likewise outperforms grayscale-only methods, reaching 87.2%. Conclusion: By combining the grayscale and structural features of logo images, the proposed method describes vehicle logos distinctively and discriminatively, effectively raising the recognition rate, and it is especially robust under weak illumination.
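At its core, the point-pair feature described above compares gray levels at paired foreground/background locations. A minimal sketch follows; the sampling, validity filtering, and function names here are illustrative, not the paper's exact procedure:

```python
import numpy as np

def point_pair_features(img, fg_pts, bg_pts):
    """Binary point-pair features: 1 where the foreground (logo) point is
    brighter than its paired background point, else 0. This captures the
    light/dark relation between logo pattern and background."""
    fg = img[tuple(np.array(fg_pts).T)]  # gray values at foreground points
    bg = img[tuple(np.array(bg_pts).T)]  # gray values at background points
    return (fg > bg).astype(np.uint8)
```

In the paper the pairs are drawn at random from the foreground and background skeleton regions and filtered for validity before forming the descriptor.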

2.
In this paper we show how to use two‐colored pixels as a generic tool for image processing. We apply two‐colored pixels as a basic operator as well as a supporting data structure for several image processing applications. Traditionally, images are represented by a regular grid of square pixels with one constant color each. In the two‐colored pixel representation, we reduce the image resolution and replace blocks of N × N pixels by one square that is split by a (feature) line into two regions with constant colors. We show how the conversion of standard mono‐colored pixel images into two‐colored pixel images can be computed efficiently by applying a hierarchical algorithm along with a CUDA‐based implementation. Two‐colored pixels overcome some of the limitations that classical pixel representations have, and their feature lines provide minimal geometric information about the underlying image region that can be effectively exploited for a number of applications. We show how to use two‐colored pixels as an interactive brush tool, achieving realtime performance for image abstraction and non‐photorealistic filtering. Additionally, we propose a realtime solution for image retargeting, defined as a linear minimization problem on a regular or even adaptive two‐colored pixel image. The concept of two‐colored pixels can be easily extended to a video volume, and we demonstrate this for the example of video retargeting.
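Fitting one two-colored pixel can be sketched by brute force: try candidate splits of a block and keep the one minimizing squared color error. This toy version tests only axis-aligned splits, whereas the paper fits arbitrary feature lines hierarchically on the GPU:

```python
import numpy as np

def two_colored_block(block):
    """Split a block into two constant-color regions along the
    axis-aligned cut that minimizes total squared error.
    Returns (axis, split index, mean color A, mean color B)."""
    best = None
    for axis in (0, 1):
        for k in range(1, block.shape[axis]):
            a, b = np.split(block, [k], axis=axis)
            # total squared error of approximating each part by its mean
            err = a.var() * a.size + b.var() * b.size
            if best is None or err < best[0]:
                best = (err, axis, k, a.mean(), b.mean())
    return best[1:]
```

A real feature line is oblique in general; restricting to axis-aligned cuts keeps the sketch short while showing the error-minimization idea.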

3.
This paper studies skeletonization and proposes a skeleton extraction algorithm. A distance transform on the object's interior pixels yields the position of each pixel's nearest boundary point, and the vector from an interior pixel to its nearest boundary point is defined as its boundary vector. From the directions of adjacent boundary vectors inside the object, the algorithm computes an inner-product value for each pixel and the minimum inner product over its 8-neighborhood, yielding the minimum-inner-product points. Skeleton seed points are selected from these points with a fixed threshold and then processed to obtain a connected skeleton. Experiments show that the algorithm preserves the completeness and connectivity of the skeleton and correctly reflects the object's topology.
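The first step, mapping every object pixel to its nearest boundary point, can be sketched with a brute-force nearest-neighbor search; real implementations use a fast distance transform instead:

```python
import numpy as np

def boundary_vectors(mask):
    """For each object pixel (mask == 1), the (dy, dx) vector to its
    nearest background pixel, found by brute force over all background
    pixels. This is the 'boundary vector' of the abstract's first step."""
    bg = np.argwhere(mask == 0)
    vecs = np.zeros(mask.shape + (2,), dtype=int)
    for r, c in np.argwhere(mask == 1):
        d = ((bg - (r, c)) ** 2).sum(axis=1)  # squared distances to background
        vecs[r, c] = bg[d.argmin()] - (r, c)
    return vecs
```

The subsequent steps would then compare the directions of neighboring boundary vectors via inner products to locate skeleton seeds.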

4.
For extracting and judging surface cracks on low-voltage current transformers, this paper proposes a detection algorithm based on a percolation algorithm and an improved OPTA (One-Pass Thinning Algorithm). First, a grayscale image of the transformer surface is acquired. Second, based on the pixel values and brightness variation of the crack region, seed pixels and a brightness threshold are set, and the percolation algorithm produces a binary image. Third, exploiting crack connectivity, the improved OPTA extracts the skeleton of the ROI (Region of Interest), which consists of single pixels. Finally, since cracks bifurcate, any pixel with more than two neighboring points is judged to lie on a crack. Experiments show that the percolation algorithm effectively extracts the ROI while preserving its linear features, that the improved OPTA thins the ROI completely to a single-pixel image, and that the proposed neighbor-count criterion achieves a detection rate above 97%, a clear improvement over the other detection methods discussed.
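The final bifurcation test is simple: a skeleton pixel with more than two 8-neighbors set is a branch point. A sketch:

```python
import numpy as np

def is_branch_point(skel, r, c):
    """True if skeleton pixel (r, c) has more than two 8-neighbors set,
    i.e. the skeleton bifurcates there (the crack criterion above)."""
    window = skel[max(r - 1, 0):r + 2, max(c - 1, 0):c + 2]
    return int(window.sum() - skel[r, c]) > 2
```

On a properly thinned single-pixel skeleton, interior path pixels have exactly two neighbors and endpoints have one, so only junctions exceed two.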

5.
This paper proposes a method for simulating X-ray images of crack defects based on qualitative visual features. Semantics are extracted from the qualitative visual features of cracks and described logically to generate a crack-defect skeleton. Motion-blur filtering is applied to each skeleton pixel; the blurred skeleton is then offset according to the desired approximate crack width and combined by weighted superposition. The whole image is motion-blur filtered according to the overall direction of the crack defect, and fuzzy geometric features are introduced to tune the simulation parameters. Experimental results show that the method can produce various crack defects with the desired fuzziness and randomness.

6.
7.
The skeleton is essential for general shape representation. The commonly required properties of a skeletonization algorithm are that the extracted skeleton should be accurate; robust to noise, position and rotation; able to reconstruct the original object; and able to produce a connected skeleton in order to preserve its topological and hierarchical properties. However, the use of a discrete image presents a lot of problems that may influence the extraction of the skeleton. Moreover, most of the methods are memory-intensive and computationally intensive, and require a complex data structure. In this paper, we propose a fast, efficient and accurate skeletonization method for the extraction of a well-connected Euclidean skeleton based on a signed sequential Euclidean distance map. A connectivity criterion is proposed, which can be used to determine whether a given pixel is a skeleton point independently. The criterion is based on a set of point pairs along the object boundary, which are the nearest contour points to the pixel under consideration and its 8 neighbors. Our proposed method generates a connected Euclidean skeleton with a single pixel width without requiring a linking algorithm or iteration process. Experiments show that the runtime of our algorithm is faster than the distance transformation and is linearly proportional to the number of pixels of an image.

8.
Conventional image skeletonization techniques implicitly assume the pixel level connectivity. However, noise inside the object regions destroys the connectivity and exhibits sparseness in the image. We present a skeletonization algorithm designed for these kinds of sparse shapes. The skeletons are produced quickly by using three operations. First, initial skeleton nodes are selected by farthest point sampling with circles containing the maximum effective information. A skeleton graph of these nodes is imposed via inheriting the neighborhood of their associated pixels, followed by an edge collapse operation. Then a skeleton fitting process based on feature-preserving Laplacian smoothing is applied. Finally, a refinement step is proposed to further improve the quality of the skeleton and deal with noise or different local shape scales. Numerous experiments demonstrate that our algorithm can effectively handle several disconnected shapes in an image simultaneously, and generate more faithful skeletons for shapes with intersections or different local scales than classic methods.
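The initial node selection is a farthest-point sampling over object pixels. A minimal greedy sketch, ignoring the paper's maximum-effective-information circles:

```python
import numpy as np

def farthest_point_sampling(pts, k):
    """Greedily pick k indices into pts so that each new point is as far
    as possible from all points chosen so far (seeding skeleton nodes)."""
    idx = [0]  # arbitrary starting point
    d = np.linalg.norm(pts - pts[0], axis=1)  # distance to chosen set
    for _ in range(k - 1):
        idx.append(int(d.argmax()))
        d = np.minimum(d, np.linalg.norm(pts - pts[idx[-1]], axis=1))
    return idx
```

The greedy rule guarantees the selected nodes spread evenly across the sparse shape before the graph and fitting steps run.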

9.
Monte Carlo rendering is widely used in the movie industry. Since it is costly to produce noise‐free results directly, Monte Carlo denoising is often applied as a post‐process. Recently, deep learning methods have been successfully leveraged in Monte Carlo denoising. They are able to produce high quality denoised results, even with very low sample rates, e.g. 4 spp (samples per pixel). However, for difficult scene configurations, some details can be blurred in the denoised results. In this paper, we aim at preserving more details from inputs rendered with low spp. We propose a novel denoising pipeline that handles features at three scales (pixel, sample, and path) to preserve sharp details, uses an improved Res2Net feature extractor to reduce the network parameters, and uses a smooth feature attention mechanism to remove low‐frequency splotches. As a result, our method achieves higher denoising quality and preserves better details than previous methods.

10.
Existing algorithms for extracting the skeletons of crack images often lose parts of the skeleton body after thinning, and their burr-removal quality degrades quickly as image size grows. To address these problems, this paper proposes a crack skeleton extraction algorithm based on template matching and highly adaptive burr removal. First, the Rosenfeld thinning algorithm is improved with template matching to preserve the skeleton's main structure. Then a highly adaptive burr-removal algorithm is proposed that uses the ratio of branch pixels to target pixels in the thinned skeleton image as its criterion, so that it adapts efficiently to crack images of different target-pixel densities and scales. Experimental results show that the algorithm effectively produces single-pixel-wide skeletons while removing burrs as far as possible, and that it is both feasible and advantageous.
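The adaptive burr-removal criterion compares each branch's pixel count with the whole skeleton's. A sketch of the decision rule; the 5% threshold is an illustrative default, not a value from the paper:

```python
def keep_branch(branch_pixels, skeleton_pixels, ratio=0.05):
    """Keep a skeleton branch only if its pixel count is at least `ratio`
    of the thinned skeleton's total target pixels; shorter branches are
    treated as burrs (spurs) and pruned."""
    return branch_pixels / skeleton_pixels >= ratio
```

Because the threshold is a ratio rather than an absolute length, the rule scales with image size, which is what lets it cope with crack images of different densities.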

11.
An image-thinning post-processing algorithm based on directional chain-code scanning and tracking
Extracting the skeleton of a target image is an important part of intelligent analysis, but the skeleton produced by Zhang's parallel thinning algorithm is not single-pixel wide and is highly prone to burrs. This paper presents a fast target-skeleton extraction algorithm that yields a single-pixel skeleton and eliminates burrs. The algorithm first applies morphological preprocessing to the extracted binary target image, then reduces the thinned image to single-pixel width using the scanning and encoding principle of the 8-neighborhood directional chain code, and finally removes burrs with an optimized 8-neighborhood directional chain code. Experimental results show that the proposed algorithm is not only efficient but also reliably produces single-pixel-wide, burr-free skeletons.
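The 8-neighborhood directional chain code encodes each step between adjacent skeleton pixels as a direction index. A sketch using the standard Freeman numbering (0 = east, increasing counter-clockwise):

```python
# Freeman 8-direction codes as (row, col) steps: 0=E, 1=NE, 2=N, ..., 7=SE
DIRS = [(0, 1), (-1, 1), (-1, 0), (-1, -1), (0, -1), (1, -1), (1, 0), (1, 1)]

def chain_code(path):
    """Encode a path of adjacent pixel coordinates as Freeman codes."""
    return [DIRS.index((r2 - r1, c2 - c1))
            for (r1, c1), (r2, c2) in zip(path, path[1:])]
```

Scanning such codes along a skeleton makes spurs easy to spot: a short run of codes that dead-ends off the main track is a burr candidate.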

12.
Morphology-based binary image coding has recently received considerable attention internationally. To improve the efficiency of skeleton-based coding, this paper proposes a binary image coding method based on ultimate erosion, a subset of the morphological skeleton. The method rests on an extended non-skeleton-point criterion that allows more points to be classified as non-skeleton points during coding, thereby raising coding efficiency. Compared with similar methods, it removes the redundancy in morphological image analysis more thoroughly and achieves very good compression: on the binary image "Tools" it reaches a compression ratio of 0.065, far better than run-length coding, quadtree coding, and chain coding.

13.
Abstract— A new threshold‐voltage compensation technique for polycrystalline‐silicon thin‐film transistors (poly‐Si TFTs) used in active‐matrix organic light‐emitting‐diode (AMOLED) display pixel circuits is presented. The new technique was applied to a conventional 2‐transistor—1‐capacitor (2T1C) pixel circuit, and a new voltage‐programmed pixel circuit (VPPC) is proposed. Theoretically, the proposed pixel is the fastest pixel with threshold‐voltage compensation reported in the literature because the new compensation technique is implemented with a static circuit block, which does not affect the response time of the conventional 2T1C pixel circuit. Furthermore, the new pixel exhibits all the other advantages of the 2T1C pixel, such as the simplicity of the peripheral drivers, and improves other characteristics, such as its behavior under temperature variations. The proposed pixel is verified through simulations with HSpice. In order to obtain realistic simulations, device parameters were extracted from fabricated low‐temperature poly‐Si (LTPS) TFTs.

14.
A Randomized Approach for Patch-based Texture Synthesis using Wavelets
We present a wavelet‐based approach for selecting patches in patch‐based texture synthesis. We randomly select the first block that satisfies a minimum error criterion, computed from the wavelet coefficients (using 1D or 2D wavelets) for the overlapping region. We show that our wavelet‐based approach improves texture synthesis for samples where previous work fails, mainly textures with prominent aligned features. It also generates textures of similar quality to synthesis using feature maps, with the advantage that our proposed method uses implicit edge information (since it is embedded in the wavelet coefficients), whereas feature maps rely explicitly on edge features. In previous work, the best patches are selected among all possible candidates using an L2 norm on the RGB or grayscale pixel values of boundary zones. The L2 metric provides the raw pixel‐to‐pixel difference, disregarding image structures, such as edges, that are relevant to the human visual system and therefore to the synthesis of new textures.
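The selection rule can be sketched as: transform the overlap regions with a wavelet, measure an L2 error on the coefficients, and take the first candidate in a random scan whose error is below a tolerance. A one-level 1D Haar transform stands in here for the paper's wavelet machinery:

```python
import numpy as np

def haar_1d(x):
    """One level of the 1D Haar transform: pairwise averages, then
    pairwise differences (the implicit edge information)."""
    return np.concatenate([(x[0::2] + x[1::2]) / 2,
                           (x[0::2] - x[1::2]) / 2])

def pick_patch(overlap, candidates, tol, rng):
    """Return the index of the first candidate, visited in random order,
    whose Haar-coefficient L2 error against the target overlap is below
    tol; None if no candidate qualifies."""
    target = haar_1d(overlap)
    for i in rng.permutation(len(candidates)):
        if np.linalg.norm(haar_1d(candidates[i]) - target) < tol:
            return int(i)
    return None
```

Randomizing the scan order is what keeps the synthesis from repeating the same "best" patch everywhere while the coefficient-space error still enforces edge alignment.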

15.
This paper introduces an approach to cosmetic surface flaw identification that is essentially invariant to changes in workpiece orientation and position while being efficient in its use of computer memory. Visual binary images of workpieces are characterized according to the number of pixels in progressive subskeleton iterations. Those subskeletons are constructed using a modified Zhou skeleton transform with disk-shaped structuring elements. Two coding schemes are proposed to record the pixel counts of succeeding subskeletons, with and without lowpass filtering. The coded pixel counts are fed on-line to a supervised neural network that has previously been trained by the backpropagation method using flawed and unflawed simulation patterns. The test workpiece is then identified as flawed or unflawed by comparing its coded pixel counts to the associated training patterns. Such off-line training using simulated patterns avoids the problem of collecting flawed samples. Since both coding schemes tremendously reduce the representative skeleton image data, significant run time is saved in each epoch of the neural network's application. Experimental results are reported using six different shapes of workpieces to corroborate the proposed approach.

16.
The paper presents a skeleton‐based approach for robust detection of perceptually salient shape features. Given a shape approximated by a polygonal surface, its skeleton is extracted using a three‐dimensional Voronoi diagram technique proposed recently by Amenta et al. [3]. Shape creases, ridges and ravines, are detected as curves corresponding to skeletal edges. Salient shape regions are extracted via skeleton decomposition into patches. The approach explores the singularity theory for ridge and ravine detection, combines several filtering methods for skeleton denoising and for selecting perceptually important ridges and ravines, and uses a topological analysis of the skeleton for detection of salient shape regions. ACM CCS: I.3.5 Computational Geometry and Object Modeling

17.
Solving aliasing artifacts is an essential problem in shadow mapping approaches. Many works have been proposed; however, most of them focus on removing the texel‐level aliasing that results from the limited resolution of shadow maps. Little work has been done to solve the pixel‐level shadow aliasing that is produced by the rasterization on the screen plane. In this paper, we propose a fast, sub‐pixel antialiased shadowing algorithm to solve the pixel aliasing problem. Our work is based on alias‐free shadow maps, which are capable of computing accurate per‐pixel shadow and incur only a small cost to extend to sub‐pixel accuracy. Instead of directly supersampling the screen space, we take facets to approximate pixels in shadow testing. The shadowed area of one facet is rapidly evaluated by projecting blocker geometry onto a supersampled 2D occlusion mask with bitmask fusion. It provides sub‐pixel occlusion sampling so as to capture fine shadow details and features. Furthermore, we introduce the silhouette mask map, which limits visibility evaluation to pixels on the silhouette only, greatly reducing the computation cost. Our algorithm runs entirely on the GPU, achieves real‐time performance, and is an order of magnitude faster than the brute‐force supersampling method at producing comparable 32× antialiased shadows.
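Fusing blocker coverage into a supersampled bitmask reduces per-pixel visibility to a popcount. A sketch of the final evaluation, assuming a 32-sample occlusion mask:

```python
def shadow_fraction(mask, samples=32):
    """Fraction of a pixel's sub-samples that are occluded, given a fused
    occlusion bitmask where bit i set means sub-sample i is in shadow."""
    return bin(mask & ((1 << samples) - 1)).count("1") / samples
```

Fusion of several blockers is then just bitwise OR of their masks before the count, which is why the per-facet evaluation stays cheap.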

18.
We propose an efficient and robust image‐space denoising method for noisy images generated by Monte Carlo ray tracing methods. Our method is based on two new concepts: virtual flash images and homogeneous pixels. Inspired by recent developments in flash photography, virtual flash images emulate photographs taken with a flash, to capture various features of rendered images without taking additional samples. Using a virtual flash image as an edge‐stopping function, our method can preserve image features that were not captured well only by existing edge‐stopping functions such as normals and depth values. While denoising each pixel, we consider only homogeneous pixels—pixels that are statistically equivalent to each other. This makes it possible to define a stochastic error bound of our method, and this bound goes to zero as the number of ray samples goes to infinity, irrespective of denoising parameters. To highlight the benefits of our method, we apply our method to two Monte Carlo ray tracing methods, photon mapping and path tracing, with various input scenes. We demonstrate that using virtual flash images and homogeneous pixels with a standard denoising method outperforms state‐of‐the‐art image‐space denoising methods.

19.
This paper studies how to measure the similarity of brush-ink strokes and proposes an ink-comparison method based on skeleton context. The method first samples skeleton points, establishing a one-to-one correspondence between the skeleton points of the virtual stroke and those of the real stroke. It then computes the context information of each skeleton point to obtain the ink similarity of each corresponding pair, and the overall similarity of the two strokes is the sum of the similarities over all corresponding skeleton points. Simulation results show that the similarity lies between 0 and 1 after normalization, with larger values indicating closer strokes, and that the number of skeleton points (50 in one run, 100 in another) has little effect on the result; both experiments show that skeleton-context ink comparison is a feasible method.
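The final score aggregates per-point similarities between corresponding skeleton-point descriptors; a sketch follows (a mean is used here instead of a raw sum so the result is already normalized, and the descriptor contents and the distance-to-similarity mapping are assumptions, not the paper's exact definitions):

```python
import numpy as np

def ink_similarity(desc_a, desc_b):
    """Similarity of two strokes: average over corresponding skeleton
    points of 1 / (1 + L2 distance) between their context descriptors,
    so identical strokes score 1.0 and the value stays in (0, 1]."""
    d = np.linalg.norm(desc_a - desc_b, axis=1)  # per-point descriptor distance
    return float(np.mean(1.0 / (1.0 + d)))
```

Averaging rather than summing is also what makes the score insensitive to whether 50 or 100 skeleton points are sampled, matching the abstract's observation.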

20.
Based on high‐resolution SAR data, in this paper, a novel automatic matching model is proposed. The model, which employs a coarse‐to‐fine strategy as a whole, consists of three steps. In the first step, edge features are extracted on different levels of pyramid images and an efficient Hausdorff distance‐based method is used to yield a coarse global feature match. Thanks to bi‐tree searching, the bottleneck of Hausdorff‐distance matching is resolved. Secondly, SSDA (Sequence Similarity Detection Algorithm) is employed to acquire tie‐points using a cross‐searching approach which treats features extracted from master and slave images equally. Finally, a local‐adaptive splitting algorithm with MMSE (Minimum Mean Square Error) is used to achieve a fine matching; the local‐adaptive splitting algorithm is the essential process for achieving sub‐pixel matching accuracy, which enhances the process's flexibility and robustness.

High-resolution airborne SAR images provided by the Institute of Electronics, CAS, were used for the experiments; the results demonstrate that the proposed model is robust, achieves high accuracy (to a fraction of a pixel), and can be successfully applied to the automatic matching of high‐resolution SAR images.
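The coarse global match rests on the Hausdorff distance between edge-feature point sets. A brute-force sketch of the distance itself; the paper accelerates the search with bi-tree indexing:

```python
import numpy as np

def hausdorff(A, B):
    """Symmetric Hausdorff distance between two 2D point sets: the larger
    of the two directed distances max_a min_b ||a - b||."""
    D = np.linalg.norm(A[:, None, :] - B[None, :, :], axis=2)  # pairwise distances
    return float(max(D.min(axis=1).max(), D.min(axis=0).max()))
```

The pairwise-distance matrix makes this O(|A||B|), which is exactly the bottleneck the abstract says bi-tree searching removes.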

