Similar Literature (20 results)
1.
This paper introduces an efficient approach to protecting ownership by hiding iris data in a digital image for authentication purposes. The idea is to secretly embed iris code data, which identifies the owner, into the content of the image. Algorithms based on biologically inspired spiking neural networks, namely the Pulse Coupled Neural Network (PCNN), are first applied to increase the contrast of the human iris image, and the intensity is adjusted with a median filter. A PCNN segmentation algorithm then determines the boundaries of the iris by locating its pupillary and limbus boundaries for further processing. A texture segmentation algorithm is presented for isolating the iris from the human eye more accurately and efficiently: a quad-tree wavelet transform is first constructed to extract texture features, and the Fuzzy c-Means (FCM) algorithm is then applied to the quad tree in a coarse-to-fine manner, locating the inner (pupillary) and outer (limbus) boundaries. Iris codes (the watermark) characterizing the underlying iris texture are then extracted using wavelet theory, and watermark embedding and extraction methods based on the Discrete Wavelet Transform (DWT) are presented to insert and recover the generated iris code. The final stage is authentication: a Hamming distance metric measures the variation between the recorded iris code and the one extracted from the watermarked (stego) image to test whether the stego image has been modified. Simulation results show the effectiveness and efficiency of the proposed approach.
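As a rough illustration of the final authentication step, the sketch below compares a stored binary iris code with the code recovered from the stego image using the normalized Hamming distance; the code length and decision threshold are hypothetical values, not taken from the paper.

```python
# Rough sketch (not the paper's exact implementation): normalized Hamming
# distance between a stored binary iris code and the code extracted from the
# stego image; code length and threshold below are hypothetical.
import numpy as np

def hamming_distance(code_a: np.ndarray, code_b: np.ndarray) -> float:
    """Fraction of differing bits between two equal-length binary iris codes."""
    assert code_a.shape == code_b.shape
    return float(np.count_nonzero(code_a != code_b)) / code_a.size

recorded = np.random.randint(0, 2, 2048)     # stored iris code (bits)
extracted = recorded.copy()
extracted[:40] ^= 1                          # simulate slight degradation
THRESHOLD = 0.05                             # hypothetical decision threshold
print("stego image modified:", hamming_distance(recorded, extracted) > THRESHOLD)
```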

2.
Iris segmentation in non-ideal images using graph cuts
A non-ideal iris image segmentation approach based on graph cuts is presented that uses both appearance and eye geometry information. A texture measure based on gradients is computed to discriminate between eyelash and non-eyelash regions, and, combined with image intensity differences between the iris, pupil, and the background (the region surrounding the iris), is used as a cue for segmentation. The texture and intensity distributions for the various regions are learned by histogramming and explicit sampling of the pixels estimated to belong to the corresponding regions. The image is modeled as a Markov random field, and energy minimization via graph cuts assigns each image pixel one of four possible labels: iris, pupil, background, and eyelash. Furthermore, the iris region is modeled as an ellipse, and the best-fitting ellipse to the initial pixel-based iris segmentation is computed to further refine the segmented region. As a result, the iris region mask and the parameterized iris shape form the outputs of the proposed approach, allowing subsequent iris recognition steps to be performed on the segmented irises. The algorithm is unsupervised and can deal with non-ideality in iris images due to out-of-plane rotation of the eye, iris occlusion by the eyelids and eyelashes, multi-modal iris grayscale intensity distributions, and various illumination effects. The proposed segmentation approach is tested on several publicly available non-ideal near-infrared (NIR) iris image databases. We compare both the segmentation error and the resulting recognition error with several leading techniques, demonstrating significantly improved results with the proposed technique.
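The sketch below illustrates only the data (unary) term of such a labeling: each pixel receives the label whose learned intensity histogram explains it best. The pairwise smoothness term and the actual graph-cut (max-flow) optimization used in the paper are deliberately omitted, and the histogram inputs are assumed to have been estimated beforehand.

```python
# Data (unary) term only: the pairwise smoothness term and the graph-cut
# optimization from the paper are omitted. Histograms are assumed to have been
# learned from sampled pixels of each region.
import numpy as np

def unary_labeling(gray, histograms, bins=32):
    """gray: 2-D uint8 image; histograms: dict label -> normalized histogram of length `bins`."""
    idx = (gray.astype(np.int32) * bins) // 256            # intensity bin per pixel
    labels = list(histograms)                              # e.g. iris/pupil/background/eyelash
    cost = np.stack([-np.log(histograms[l][idx] + 1e-9) for l in labels])
    return np.array(labels)[np.argmin(cost, axis=0)]       # label map, shape of `gray`
```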

3.
A texture region of an image is a region in which edges are relatively densely distributed under edge detection and where spurious edges appear. Studies show that the errors of many existing image processing algorithms concentrate in texture regions. The goal of texture region segmentation is to separate these regions so that they can be processed differently. This paper proposes a texture region detection and segmentation algorithm based on fuzzy enhancement. Exploiting the characteristics of texture regions, the algorithm first enhances pixel contrast within texture regions and then uses the Canny edge detector to improve texture region detection, finally achieving accurate detection and segmentation of image texture regions.
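A minimal sketch of the general idea, not the paper's exact algorithm: after contrast enhancement, regions where Canny edges are densely distributed are marked as texture regions. The thresholds and window size below are illustrative assumptions.

```python
# Illustrative only: texture regions taken to be areas of high Canny edge
# density; thresholds and window size are assumptions, not the paper's values.
import cv2
import numpy as np

def texture_region_mask(gray, canny_lo=50, canny_hi=150, win=15, density_thr=0.15):
    """gray: 2-D uint8 image; returns a binary mask (1 = texture region)."""
    edges = cv2.Canny(gray, canny_lo, canny_hi)                        # 0/255 edge map
    density = cv2.boxFilter(edges.astype(np.float32) / 255.0, -1, (win, win))
    return (density > density_thr).astype(np.uint8)
```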

4.
Iris segmentation is an essential module in iris recognition because it defines the effective image region used for subsequent processing such as feature extraction. Traditional iris segmentation methods often involve an exhaustive search of a large parameter space, which is time consuming and sensitive to noise. To address these problems, this paper presents a novel algorithm for accurate and fast iris segmentation. After efficient reflection removal, an Adaboost-cascade iris detector is first built to extract a rough position of the iris center. Edge points of iris boundaries are then detected, and an elastic model named pulling and pushing is established. Under this model, the center and radius of the circular iris boundaries are iteratively refined in a way driven by the restoring forces of Hooke's law. Furthermore, a smoothing spline-based edge fitting scheme is presented to deal with noncircular iris boundaries. After that, eyelids are localized via edge detection followed by curve fitting. The novelty here is the adoption of a rank filter for noise elimination and a histogram filter for tackling the shape irregularity of eyelids. Finally, eyelashes and shadows are detected via a learned prediction model. This model provides an adaptive threshold for eyelash and shadow detection by analyzing the intensity distributions of different iris regions. Experimental results on three challenging iris image databases demonstrate that the proposed algorithm outperforms state-of-the-art methods in both accuracy and speed.
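A simplified sketch of the pulling-and-pushing idea: edge points exert spring-like (Hooke's law) restoring forces on an initial circle whose center and radius are refined iteratively. The step size and iteration count are assumptions, and this is not the authors' exact formulation.

```python
# Simplified pulling-and-pushing sketch: edge points apply Hooke-like restoring
# forces that iteratively refine a circle's center and radius. Step size and
# iteration count are illustrative assumptions.
import numpy as np

def refine_circle(edge_pts, center, radius, iters=20, step=0.5):
    """edge_pts: (N, 2) array of boundary points; returns refined (center, radius)."""
    c, r = np.asarray(center, dtype=float), float(radius)
    for _ in range(iters):
        vec = edge_pts - c                                     # vectors from center to edge points
        dist = np.linalg.norm(vec, axis=1)
        force = (dist - r)[:, None] * (vec / dist[:, None])   # spring force per point
        c += step * force.mean(axis=0)                         # net force moves the center
        r += step * (dist - r).mean()                          # average stretch adjusts the radius
    return c, r
```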

5.
This research presents an object-oriented technique for habitat classification at different segmentation levels based on imagery from an Edgetech 272 side scan sonar. We investigate the success of object parameters such as shape and size, as well as texture, in discriminating reef from sand habitat. The results are evaluated using traditional digitization, based on visual assessment of the sidescan imagery, and video transects. Whereas traditional pixel-based classification yields a pixelized (salt-and-pepper) representation of habitat distribution, the object-based classification technique produces habitat objects (raster or vector). The object-oriented classification results are cross-validated using confusion matrices in image classification software and error matrices from underwater video transects, showing an overall accuracy of 80% for two classes within the image at three segmentation levels and an overall accuracy of 60% for three classes at two segmentation levels. This compares with a digitized-layer accuracy of 81% for two classes and 72% for three classes, demonstrating the successful application of object-oriented methods for habitat mapping. The technique retains spatially discrete habitat pattern information in a classified vector shapefile, with methods that are automated, repeatable, objective, and capable of processing many sidescan records more efficiently.

6.
Texture image segmentation and classification is a frontier topic in current image processing and machine vision research. Traditional methods are mostly based on morphological structure and statistical descriptions; being disconnected from the mechanisms of human vision, they cannot further improve segmentation accuracy. This paper reviews a new class of methods that has emerged in recent years: multi-channel filtering in the spatial/spatial-frequency (s/sf) plane. These methods agree well with human visual mechanisms and achieve good segmentation results on both artificial and natural textures.
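One common realization of this family of multi-channel spatial/spatial-frequency methods is a Gabor filter bank; the sketch below builds a small bank with OpenCV. The orientations, wavelengths, and kernel parameters are illustrative assumptions.

```python
# Illustrative Gabor filter bank as one realization of multi-channel
# spatial / spatial-frequency filtering; all parameters are assumptions.
import cv2
import numpy as np

def gabor_feature_stack(gray):
    """gray: 2-D image; returns an H x W x 12 stack of filter responses."""
    responses = []
    for theta in np.arange(0, np.pi, np.pi / 4):            # 4 orientations
        for lambd in (4.0, 8.0, 16.0):                      # 3 wavelengths
            kern = cv2.getGaborKernel((21, 21), 4.0, theta, lambd, 0.5, 0)
            responses.append(cv2.filter2D(gray.astype(np.float32), -1,
                                          kern.astype(np.float32)))
    return np.stack(responses, axis=-1)
```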

7.
Color image segmentation: advances and prospects
H. D. Cheng, X. H. Jiang, Y. Sun, Jingli Wang. Pattern Recognition, 2001, 34(12): 2259-2281
Image segmentation is essential and critical to image processing and pattern recognition. This survey provides a summary of currently available color image segmentation techniques. Basically, color segmentation approaches are based on monochrome segmentation approaches operating in different color spaces. Therefore, we first discuss the major approaches for segmenting monochrome images: histogram thresholding, characteristic feature clustering, edge detection, region-based methods, fuzzy techniques, neural networks, etc.; then review some major color representation methods and their advantages and disadvantages; and finally summarize color image segmentation techniques using different color representations. The use of color models for image segmentation is also discussed, and some novel approaches such as fuzzy and physics-based methods are investigated as well.
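As a minimal example of one technique family covered by the survey (feature-space clustering), the sketch below clusters pixel colors with k-means via OpenCV; the number of clusters and the termination criteria are arbitrary illustrative choices.

```python
# One technique family from the survey (feature-space clustering), here as
# k-means on pixel colors via OpenCV; K and the criteria are arbitrary choices.
import cv2
import numpy as np

def kmeans_color_segmentation(bgr, k=4):
    """bgr: H x W x 3 uint8 image; returns (label map, color-quantized image)."""
    pixels = bgr.reshape(-1, 3).astype(np.float32)
    criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 20, 1.0)
    _, labels, centers = cv2.kmeans(pixels, k, None, criteria, 5,
                                    cv2.KMEANS_RANDOM_CENTERS)
    quantized = centers[labels.flatten()].reshape(bgr.shape).astype(np.uint8)
    return labels.reshape(bgr.shape[:2]), quantized
```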

8.
Texture classification is one of the most important tasks in computer vision and has been extensively investigated over the last several decades. Previous texture classification methods mainly used template-matching-based classifiers such as the Support Vector Machine and k-Nearest-Neighbour. Given enough training images, state-of-the-art texture classification methods can achieve very high classification accuracies on some benchmark databases. However, when the number of training images is limited, which usually happens in real-world applications because of the high cost of obtaining labelled data, the accuracy of those state-of-the-art methods deteriorates due to overfitting. In this paper we aim to develop a novel framework that can correctly classify texture images with only a small number of training images. Taking into account the repetition and sparsity properties of textures, we propose a sparse-representation-based multi-manifold analysis framework for texture classification from few training images. A set of new training samples is generated from each training image by a scale and spatial pyramid, and the training samples belonging to each class are then modelled by a manifold based on sparse representation. We learn a sparse-representation dictionary and a projection matrix for each class and classify test images based on the projected reconstruction errors. The framework provides a more compact model than template-matching-based texture classification methods and mitigates the overfitting effect. Experimental results show that the proposed method achieves reasonably high generalization capability even with as few as 3 training images and significantly outperforms state-of-the-art texture classification approaches on three benchmark datasets.
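A highly simplified sketch of the decision rule only: a test feature is assigned to the class whose dictionary reconstructs it with the smallest error. Ordinary least squares stands in for the paper's sparse coding and learned projections, purely for illustration.

```python
# Decision rule only: assign the class whose dictionary reconstructs the test
# feature best. Least squares stands in for the paper's sparse coding and
# learned projections (illustration, not the proposed method).
import numpy as np

def classify_by_reconstruction(x, class_dictionaries):
    """x: (d,) feature vector; class_dictionaries: dict label -> (d, n_atoms) array."""
    errors = {}
    for label, D in class_dictionaries.items():
        coef, *_ = np.linalg.lstsq(D, x, rcond=None)   # stand-in for sparse coding
        errors[label] = np.linalg.norm(x - D @ coef)   # reconstruction error
    return min(errors, key=errors.get)
```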

9.
Transformer models have achieved excellent results in natural language processing and, because they bridge vision and language well, have also attracted great interest from the computer vision community. This paper surveys more than one hundred representative vision Transformer methods for a variety of recognition tasks, compares model performance within each task, and on that basis summarizes the strengths, weaknesses, and open challenges of the models for each class of task. According to recognition granularity, it considers methods based on global recognition, such as image classification and video classification, as well as methods based on local recognition, such as object detection and visual segmentation. Given the wide adoption of existing methods in three specific recognition tasks, it also summarizes methods for face recognition, action recognition, and pose estimation, and reviews the state of general-purpose methods applicable to multiple vision tasks or independent of domain. Transformer-based models enable many end-to-end approaches and continually pursue a balance between accuracy and computational cost. For global recognition tasks, vision Transformers explore patch-sequence splitting and token feature representation; for local recognition tasks, they perform well because they capture global information better. In face recognition and action recognition, the attention mechanism reduces errors in feature representation and can handle rich and diverse features. Transformers can resolve feature misalignment in pose estimation, which benefits regression-based methods, and reduce the ambiguity introduced by depth mapping in 3D estimation. Extensive studies demonstrate the effectiveness of vision Transformers in recognition tasks, and improvements in feature representation and network architecture further boost performance.
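As a small illustration of the patch-sequence splitting explored by global-recognition vision Transformers, the sketch below cuts an image into non-overlapping patches and flattens them into a token sequence; the patch size is an assumed value.

```python
# Patch-sequence splitting as used before token embedding in vision
# Transformers; the 16-pixel patch size is an assumed value.
import numpy as np

def image_to_patch_sequence(image, patch=16):
    """image: H x W x C array with H, W divisible by `patch`; returns (num_patches, patch*patch*C)."""
    h, w, c = image.shape
    x = image.reshape(h // patch, patch, w // patch, patch, c)
    x = x.transpose(0, 2, 1, 3, 4)            # group patches: (nH, nW, P, P, C)
    return x.reshape(-1, patch * patch * c)   # flattened patch tokens
```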

10.
A system of methods for the detection and segmentation of the iris in frontal eye images is presented. The input data are images used in modern iris recognition systems. The output is the coordinates of the outer and inner iris borders and the mask of the visible iris region, or a decision that the image does not contain an iris of acceptable quality. The system starts processing with an approximate detection of the eye center, followed by an approximate detection of the outer and inner iris borders. If one of these borders is not detected, a further attempt is made to locate it using a different algorithm. Ultimately, the precise borders of the iris are determined in the last steps using specifically designed methods. The system is tested on public iris image databases as well as in the international IREX NIST test.

11.
Accurately representing iris texture with an effective feature extraction algorithm is the key to iris recognition. Given the particular nature of the iris recognition task, a network model for iris feature encoding, IrisCodeNet, is proposed. The architecture uses an improved BasicBlock combined with AM-Softmax (additive margin softmax), a loss function that enlarges the decision margin. To obtain the best recognition performance, the AM-Softmax parameter settings, the form of the preprocessed iris image input, the data augmentation strategy, and the network input size were studied in detail. Experimental results show that the feature extractor trained with IrisCodeNet, when tested on the CASIA-Iris-Thousand, CASIA-Iris-Distance, and IITD iris databases, achieves equal error rates (EER) and true acceptance rates (TAR) that far surpass widely used traditional algorithms. Notably, IrisCodeNet achieves excellent recognition performance without the traditional iris normalization or precise iris segmentation steps. Visualization with Grad-CAM (gradient-weighted class activation mapping) further shows that the network effectively attends to iris texture information, demonstrating IrisCodeNet's strong capability for iris texture feature extraction.
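A minimal NumPy sketch of the AM-Softmax idea mentioned above: the target-class cosine similarity is reduced by an additive margin before the scaled softmax, enlarging the decision margin. The scale s and margin m are illustrative, not the paper's settings.

```python
# AM-Softmax sketch: subtract an additive margin m from the target-class cosine
# similarity, then apply a scaled softmax cross-entropy. s and m are
# illustrative values, not the paper's settings.
import numpy as np

def am_softmax_loss(cosine, labels, s=30.0, m=0.35):
    """cosine: (N, C) cosine similarities; labels: (N,) integer class indices."""
    rows = np.arange(len(labels))
    logits = s * cosine.copy()
    logits[rows, labels] = s * (cosine[rows, labels] - m)   # margin on target class
    logits -= logits.max(axis=1, keepdims=True)             # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -log_probs[rows, labels].mean()
```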

12.
During image acquisition, images often carry a certain amount of noise, which damages texture structure and in turn interferes with semantic segmentation. Most existing semantic segmentation methods for noisy images adopt a denoise-then-segment pipeline; however, this loses semantic information during denoising and degrades segmentation. To address this problem, a multi-scale, multi-stage feature fusion method for semantic segmentation of noisy images is proposed, which uses the high-level semantic information and low-level image information from each backbone stage to reinforce object contour semantics. By constructing stage-wise collaborative segmentation-denoising blocks, the segmentation and denoising tasks are iterated jointly to capture more accurate semantic features. Quantitative evaluation on the PASCAL VOC 2012 and Cityscapes datasets shows that the model still achieves good segmentation results under noise of different variances.

13.
To address the weak iris texture features and specular reflection spots found in visible-light iris recognition, an unsupervised visible/near-infrared fusion model based on multi-task learning (MTIris-Fusion) is proposed. An end-to-end fusion backbone based on an improved DenseU-Net is designed, together with a loss function that adaptively balances how much of each source image's important information is preserved, maintaining the similarity between the fusion result and the source images and thereby achieving unsupervised training. The weights of the multiple fusion tasks are updated through an elastic weight consolidation (EWC) mechanism, avoiding catastrophic forgetting in the multi-task network. Experiments on the PolyU Cross Spectral Iris dataset show that, compared with other methods, this approach preserves both the color texture of the visible-light iris and the structural information of the near-infrared image while effectively suppressing reflection-spot noise in the visible-light image, which is of significant value for iris image quality enhancement.
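A minimal sketch of the elastic weight consolidation (EWC) penalty used to avoid catastrophic forgetting across the fusion tasks: parameters that were important for a previous task (large Fisher values) are anchored to their old values. The weighting factor is an assumed hyperparameter.

```python
# EWC penalty sketch: parameters important to an earlier task (large Fisher
# values) are anchored to their previous values; `lam` is an assumed weight.
import numpy as np

def ewc_penalty(params, old_params, fisher, lam=1.0):
    """All arguments are flat arrays of equal shape; returns the scalar penalty."""
    return 0.5 * lam * float(np.sum(fisher * (params - old_params) ** 2))
```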

14.
Image segmentation based on texture primitives
Texture segmentation is one of the fundamental problems in image processing. An efficient, robust segmentation method is needed for a wide range of texture images, so a texture image segmentation algorithm based on texture primitives is proposed. First, the Haar wavelet is used as the transform tool to obtain directional texture subimages; then a new texture primitive extraction method is given, and on that basis statistical methods and vector fields are applied to segment texture regions from coarse to fine. This approach can not only segment texture images but also describe the texture structure within each region, facilitating higher-level image processing built on this segmentation.
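A small sketch of the first step only, a single-level 2-D Haar decomposition yielding the directional detail subimages the method builds on; the texture primitive extraction and coarse-to-fine segmentation themselves are omitted.

```python
# Single-level 2-D Haar decomposition yielding the directional detail
# subimages; the texture-primitive extraction itself is not shown.
import numpy as np

def haar_level1(img):
    """img: 2-D array; returns (approximation, column detail, row detail, diagonal detail)."""
    img = img[:img.shape[0] // 2 * 2, :img.shape[1] // 2 * 2].astype(np.float32)
    a, b = img[0::2, 0::2], img[0::2, 1::2]
    c, d = img[1::2, 0::2], img[1::2, 1::2]
    ll = (a + b + c + d) / 4          # approximation (local average)
    lh = (a - b + c - d) / 4          # detail across columns
    hl = (a + b - c - d) / 4          # detail across rows
    hh = (a - b - c + d) / 4          # diagonal detail
    return ll, lh, hl, hh
```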

15.
A survey of color image segmentation methods
Because color images provide richer information than grayscale images, color image processing is receiving increasing attention. Color image segmentation is an important problem in color image processing and can be viewed as the application of grayscale segmentation techniques in various color spaces. To give researchers in this field a fairly comprehensive view of current color image segmentation methods, this paper presents a systematic review: it first briefly introduces various color spaces and then surveys the main color image segmentation techniques, including histogram thresholding, feature-space clustering, region-based methods, edge detection, fuzzy methods, neural networks, and physics-based methods, comparing their advantages and disadvantages. The comparison suggests that, because fuzzy techniques can represent and handle uncertainty well, they have broad application prospects in color image segmentation.
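As a minimal example of the fuzzy techniques the survey singles out, the sketch below implements the standard fuzzy c-means update rules on pixel colors; the fuzzifier, cluster count, and iteration budget are illustrative choices.

```python
# Standard fuzzy c-means updates on pixel colors, as one of the fuzzy
# techniques highlighted by the survey; c, m and the iteration budget are
# illustrative choices.
import numpy as np

def fcm(pixels, c=3, m=2.0, iters=30, eps=1e-8):
    """pixels: (N, 3) float colors; returns (membership (N, c), cluster centers (c, 3))."""
    rng = np.random.default_rng(0)
    u = rng.random((len(pixels), c))
    u /= u.sum(axis=1, keepdims=True)
    for _ in range(iters):
        um = u ** m
        centers = (um.T @ pixels) / um.sum(axis=0)[:, None]                # weighted means
        dist = np.linalg.norm(pixels[:, None, :] - centers[None, :, :], axis=2) + eps
        u = dist ** (-2.0 / (m - 1.0))                                      # inverse-distance memberships
        u /= u.sum(axis=1, keepdims=True)
    return u, centers
```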

16.
The contourlet transform is an emerging multiscale, multidirectional image processing technique. It effectively represents the smooth curvature details typical of natural images, overcoming a major drawback of the 2-D wavelet transform. Previously, we developed a contourlet image model, the contourlet contextual hidden Markov model (C-CHMM). In this paper, we further develop a multiscale texture segmentation technique based on the C-CHMM. The segmentation method combines a model comparison approach with multiscale fusion and a neighbor combination process. It also features a neighborhood selection scheme based on smoothed context maps, for both model estimation and neighbor combination. Through a series of segmentation experiments, we examine the effectiveness of the C-CHMM in comparison with closely related models. We also investigate how different context designs affect segmentation performance. Moreover, we show that the C-CHMM-based technique provides improved accuracy in segmenting texture patterns of diversified nature compared with popular methods such as HMTseg and JMCMS. These experiments demonstrate the great potential of the C-CHMM for image analysis applications.

17.
Information-Theoretic Active Polygons for Unsupervised Texture Segmentation
Curve evolution models used in image segmentation that are based on image region information usually rely on simple statistics such as means and variances and hence cannot account for the higher-order nature of the textural characteristics of image regions. In addition, object delineation by active contour methods results in a contour representation that still requires a substantial amount of data to be stored for subsequent multimedia applications such as visual information retrieval from databases. Polygonal approximations of the extracted continuous curves are required to reduce the amount of data, since polygons are powerful approximators of shapes for use in later recognition stages such as shape matching and coding. The key contribution of this paper is the development of a new active contour model that ties the desirable polygonal representation of an object directly to the image segmentation process. This model can robustly capture texture boundaries by way of higher-order statistics of the data, using an information-theoretic measure and a formulation in terms of ordinary differential equations. The new variational texture segmentation model is unsupervised, since no prior knowledge of the textural properties of image regions is used. Another contribution is a new polygon regularizer algorithm that uses electrostatics principles; this global regularizer is more consistent than local polygon regularization in preserving local features such as corners.

18.
Biometric research has experienced significant advances in recent years, given the need for more stringent security requirements. More important is the need to overcome the rigid constraints necessitated by the practical implementation of sensible but effective security methods such as iris recognition. An inventive iris acquisition method with less constrained image-taking conditions can impose minimal to no constraints on the iris verification and identification process as well as on the subject. Consequently, to provide acceptable measures of accuracy, it is critical for such an iris recognition system to be complemented by a robust iris segmentation approach to overcome various noise effects introduced through image capture under different recording environments and scenarios. This research introduces a robust and fast segmentation approach towards less constrained iris recognition using noisy images contained in the UBIRIS.v2 database (the second version of the UBIRIS noisy iris database). The proposed algorithm consists of five steps: (1) detecting the approximate localization of the eye area of the noisy image captured at the visible wavelength using the extracted sclera area, (2) defining the outer iris boundary, which is the boundary between the iris and sclera, (3) detecting the upper and lower eyelids, (4) verifying and correcting the outer iris boundary detection, and (5) detecting the pupil area and eyelashes and providing means for verifying the reliability of the segmentation results. The results demonstrate an accuracy estimated at 98% when using 500 randomly selected images from the UBIRIS.v2 partial database, and approximately 97% in the "Noisy Iris Challenge Evaluation (NICE.I)", an international competition involving 97 participants worldwide, ranking this research group in sixth position. This accuracy is achieved at a processing speed nearing real time.

19.
20.