Similar Literature
20 similar documents retrieved.
1.
A conventional automatic fingerprint matching process uses a similarity score to quantify the similarity between the fingerprint images to be matched; the score can be determined with a minutiae extraction algorithm (MEA), which extracts minutiae from the fingerprint images. The performance of the MEA relies on the quality of the fingerprint images, and for blurred fingerprints it becomes difficult to obtain a reliable similarity score. As a result, an image enhancement algorithm should be combined with the MEA when the fingerprint image is blurred. In this study, a Volterra filter is proposed for enhancing blurred fingerprints and is compared against other enhancement algorithms. Experimental results show that the Volterra filter outperforms techniques such as the Laplacian, Wiener, and Gabor filters for enhancing blurred images, while its computational complexity is moderate among the techniques considered.
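The abstract does not specify the Volterra kernel used; one widely used quadratic (second-order) Volterra operator for image enhancement is the 2-D Teager filter. A minimal NumPy sketch under that assumption — the function names and the unsharp-style combination are illustrative, not the paper's exact method:

    import numpy as np

    def teager_quadratic(x):
        # 2-D Teager operator, a quadratic Volterra filter; computed on the
        # interior pixels, with a zero border.
        x = x.astype(float)
        y = np.zeros_like(x)
        c = x[1:-1, 1:-1]
        y[1:-1, 1:-1] = (3.0 * c**2
                         - 0.5 * x[:-2, 2:]  * x[2:, :-2]    # anti-diagonal pair
                         - 0.5 * x[:-2, :-2] * x[2:, 2:]     # diagonal pair
                         -       x[1:-1, :-2] * x[1:-1, 2:]  # horizontal pair
                         -       x[:-2, 1:-1] * x[2:, 1:-1]) # vertical pair
        return y

    def enhance(x, lam=0.001):
        # Unsharp-style enhancement: add a scaled Teager response to the input.
        return np.clip(x + lam * teager_quadratic(x), 0, 255)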

2.
A novel compression algorithm for fingerprint images is introduced. Using wavelet packets and lattice vector quantization, a new vector quantization scheme based on an accurate model for the distribution of the wavelet coefficients is presented. The model is based on the generalized Gaussian distribution. We also discuss a new method for determining the largest radius of the lattice used and its scaling factor, for both uniform and piecewise-uniform pyramidal lattices. The proposed algorithms aim at achieving the best rate-distortion function by adapting to the characteristics of the subimages. In the proposed optimization algorithm, no assumptions about the lattice parameters are made, and no training or multiple quantization passes are required. We also show that the wedge-region problem encountered with sharply distributed random sources is resolved by the proposed algorithm. The proposed algorithms adapt to variability in the input images and to specified bit rates. Compared to other available image compression algorithms, the proposed algorithms yield higher-quality reconstructed images at identical bit rates.
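For reference, the generalized Gaussian density that is standard in wavelet-coefficient modeling (the abstract does not state the exact parameterization used) is

    p(x) = \frac{\nu\,\alpha(\nu)}{2\sigma\,\Gamma(1/\nu)}
           \exp\!\left(-\left[\frac{\alpha(\nu)\,|x|}{\sigma}\right]^{\nu}\right),
    \qquad
    \alpha(\nu) = \sqrt{\frac{\Gamma(3/\nu)}{\Gamma(1/\nu)}},

where sigma is the standard deviation and nu the shape parameter (nu = 2 recovers the Gaussian, nu = 1 the Laplacian).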

3.
We present an irreversible watermarking technique robust to affine-transform attacks for camera, biomedical, and satellite images stored as monochrome bitmaps. The approach is based on image normalisation: both watermark embedding and extraction are carried out with respect to an image normalised to meet a set of predefined moment criteria, and the normalisation procedure is invariant to affine-transform attacks. The resulting scheme is suitable for public watermarking applications, where the original image is not available for watermark extraction. A direct-sequence code-division multiple-access (DS-CDMA) approach is used to embed multibit text information in the DCT and DWT transform domains. The proposed watermarking schemes are robust against various attacks such as Gaussian noise, shearing, scaling, rotation, flipping, affine transforms, signal processing, and JPEG compression. Performance is measured using standard image-processing metrics.
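A minimal sketch of the direct-sequence spread-spectrum idea in the DCT domain. This is a global-DCT simplification: the mid-frequency band, embedding strength, and function names are illustrative assumptions, and the moment-based normalisation step is omitted:

    import numpy as np
    from scipy.fft import dctn, idctn

    def midband_indices(shape, lo=8, hi=64):
        # Illustrative mid-frequency band: coefficients whose index sum
        # falls in [lo, hi).
        r, c = np.indices(shape)
        return np.where((r + c >= lo) & (r + c < hi))

    def embed(img, bits, strength=2.0, seed=1):
        C = dctn(img.astype(float), norm='ortho')
        idx = midband_indices(C.shape)
        rng = np.random.default_rng(seed)
        for b in bits:                       # one PN sequence per bit (CDMA)
            pn = rng.choice([-1.0, 1.0], size=len(idx[0]))
            C[idx] += strength * (1.0 if b else -1.0) * pn
        return idctn(C, norm='ortho')

    def extract(img, n_bits, seed=1):
        # Blind extraction: regenerate the same PN sequences and correlate.
        C = dctn(img.astype(float), norm='ortho')
        idx = midband_indices(C.shape)
        rng = np.random.default_rng(seed)
        return [int(np.dot(C[idx], rng.choice([-1.0, 1.0], size=len(idx[0]))) > 0)
                for _ in range(n_bits)]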

4.
In single photon emission computed tomography (SPECT) images, differences between the brains of different subjects require normalising the images with respect to a reference template. The general affine model with 12 parameters is usually chosen as a first normalisation step. Typically the Levenberg-Marquardt method or, more often, the Gauss-Newton method is used to optimise a cost function that reaches an extremum when the image matches the template. In this work, these optimisation algorithms are compared with two alternative versions of the Gauss-Newton method. Both alternatives include an additional parameter that allows the step length along the descent direction to change adaptively. Experimental and simulated results show that including this parameter improves the convergence rate considerably.
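The paper's exact step-length rule is not given in the abstract; the sketch below shows the general idea of a Gauss-Newton iteration with an extra, adaptively shrunk step parameter (all names are illustrative):

    import numpy as np

    def gauss_newton_adaptive(residual, jacobian, p0, iters=50, tol=1e-8):
        p = np.asarray(p0, dtype=float)       # e.g. the 12 affine parameters
        cost = lambda q: 0.5 * np.sum(residual(q) ** 2)
        for _ in range(iters):
            r, J = residual(p), jacobian(p)
            step = np.linalg.lstsq(J, -r, rcond=None)[0]  # Gauss-Newton direction
            lam = 1.0                          # adaptive step length
            while lam > 1e-4 and cost(p + lam * step) >= cost(p):
                lam *= 0.5                     # shrink until the cost decreases
            if np.linalg.norm(lam * step) < tol:
                break
            p = p + lam * step
        return p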

5.
We propose and implement a new method for simulating random remote-sensing images that integrates the PROSAIL model, random land-cover class generation, and a scale-extension mechanism, supporting both random point-image and random block-image simulation. In the experiments, near-infrared spectral-band images of the HJ-1 CCD sensor were quantitatively simulated, along with test images subject to different translations, rotations, and scale changes. The simulated images were used to study how translation, rotation, scale change, and other factors affect near-infrared image registration; block images with randomly distributed land cover and scale changes were used to analyze the role of block effects in registration; and the simulated images were further applied to assess the suitability of registration algorithms. The simulation results show that image block effects have a significant impact on image registration.

6.
Research and Application of Image Texture Synthesis
This paper introduces the basic concepts of texture and texture synthesis and reviews several common texture-synthesis algorithms. By analyzing the patch-stitching texture-synthesis algorithm, we improve it and apply its patch-stitching principle to image stitching, handling the seams by weighting pixels with perceptually motivated weights. Experimental results show that the method is simple and practical: images taken from different viewpoints can be stitched with this algorithm, and the seam processing yields good results.
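The paper's perceptual weights are not detailed in the abstract; a minimal linear-feathering sketch for blending the overlap between two patches (assumed grayscale arrays of equal height) illustrates the seam-weighting idea:

    import numpy as np

    def feather_stitch(left, right, overlap):
        # Blend the last `overlap` columns of `left` with the first
        # `overlap` columns of `right` using linearly ramped weights.
        alpha = np.linspace(1.0, 0.0, overlap)[None, :]
        seam = alpha * left[:, -overlap:] + (1.0 - alpha) * right[:, :overlap]
        return np.hstack([left[:, :-overlap], seam, right[:, overlap:]])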

7.
Zhai Chunjie, Zhou Guixue. Laser Technology, 2021, 45(5): 625-629
To address the low contrast and low acquisition efficiency caused by complex fingerprint background colors during on-scene fingerprint image capture, a portable fingerprint-image acquisition device was designed around a smartphone. The phone is connected to a microcontroller that electrically switches light sources of different wavelengths to obtain multispectral images, and fingerprint images under the different light sources are captured automatically. To ensure that the image quality meets subsequent recognition requirements, the images were enhanced using multispectral fusion and directional filtering. The results show that when…

8.
Blocking artifacts often exist in images compressed by standards such as JPEG and MPEG, causing serious image degradation. Many algorithms have been proposed over the last decade to alleviate this degradation by reducing the quantization noise. Unfortunately, these algorithms only produce satisfactory results under the unrealistic assumption that the noise magnitude is known; in most applications the user has only the degraded image, without any side information about the noise distribution, so the efficiency of existing denoising algorithms drops significantly. In this paper, a new metric is first given to evaluate blocking artifacts; a non-local means filter is then applied to remove quantization noise from the blocks. During this process, non-local means filters with different variances are used for deblocking, and their effectiveness is recorded for reference. The final deblocked image combines all blocks, each filtered with its optimal parameters. Experimental results show that the proposed algorithm consistently outperforms its peers on all kinds of images.
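A simplified whole-image sketch of the idea (the paper operates per block; the blockiness metric and the set of filter strengths below are illustrative assumptions, with the image assumed to be a float array in [0, 1]):

    import numpy as np
    from skimage.restoration import denoise_nl_means

    def blockiness(img, B=8):
        # Mean absolute jump across B x B block boundaries (toy metric).
        v = np.abs(np.diff(img, axis=1))[:, B - 1::B].mean()
        h = np.abs(np.diff(img, axis=0))[B - 1::B, :].mean()
        return v + h

    def deblock(img, hs=(0.02, 0.05, 0.1, 0.2)):
        # Try several filter strengths and keep the least blocky result.
        candidates = [denoise_nl_means(img, patch_size=5, patch_distance=6, h=h)
                      for h in hs]
        return min(candidates, key=blockiness)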

9.
Enhancing the regions around singular points has long been a difficult problem in fingerprint image enhancement: separable Gabor filtering damages the ridge structure near singular points, while directional Fourier filtering restores ridges poorly in ordinary regions. This paper combines the strengths of the two and proposes a new filtering method (FS-Gabor). The fingerprint image is first preprocessed to obtain its orientation, frequency, and mask information. The singular points are then located, and a region of fixed size around each is marked. Finally, a different filter is applied depending on the pixel's location. An improved fingerprint ridge-frequency estimation method is also proposed, which enlarges the effective fingerprint area. Experimental results show that the EER (equal error rate) of fingerprints filtered by the proposed method is 26% lower than with directional Fourier filtering and 49% lower than with separable Gabor filtering.
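For reference, an even-symmetric Gabor kernel tuned to a block's ridge orientation theta and frequency freq, as used in classical fingerprint enhancement (a generic sketch, not the paper's FS-Gabor variant; parameter values are illustrative):

    import numpy as np
    from scipy.ndimage import convolve

    def gabor_kernel(theta, freq, sigma=4.0, size=17):
        half = size // 2
        y, x = np.mgrid[-half:half + 1, -half:half + 1]
        xr = x * np.cos(theta) + y * np.sin(theta)    # rotate to ridge direction
        yr = -x * np.sin(theta) + y * np.cos(theta)
        return (np.exp(-(xr**2 + yr**2) / (2 * sigma**2))
                * np.cos(2 * np.pi * freq * xr))

    # Enhance one block whose local orientation/frequency are known:
    # enhanced = convolve(block.astype(float), gabor_kernel(theta, freq))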

10.
Conventional image hash functions exploit only the luminance component of color images to generate robust hashes, and therefore have limited discriminative capacity. In this paper, we propose a robust image hash function for color images that takes all components into account and achieves good discrimination. First, the proposed hash function re-scales the input image to a fixed size. Second, it extracts local color features by converting the RGB image into the HSI and YCbCr color spaces and computing the block mean and variance of each component. Finally, it takes the Euclidean distances between the block features and a reference feature as hash values. Experiments validate the effectiveness of the proposed hash function; receiver operating characteristic (ROC) comparisons with two existing algorithms show that it achieves a better trade-off between perceptual robustness and discriminative capability.
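A condensed sketch of the pipeline using only the YCbCr branch (the HSI branch is omitted, and the choice of reference feature is an illustrative assumption; all names are hypothetical):

    import numpy as np
    from PIL import Image

    def color_hash(path, size=256, B=64):
        img = np.asarray(Image.open(path).convert('YCbCr')
                         .resize((size, size))).astype(float)
        feats = []
        for ch in range(3):                        # Y, Cb, Cr components
            blocks = img[:, :, ch].reshape(size // B, B, size // B, B)
            blocks = blocks.transpose(0, 2, 1, 3).reshape(-1, B * B)
            feats.append(np.c_[blocks.mean(1), blocks.var(1)])
        F = np.vstack(feats)
        ref = F.mean(axis=0)                       # assumed reference: global mean
        return np.linalg.norm(F - ref, axis=1)     # hash = distances to reference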

11.
This paper proposes a new method for estimating the ridge frequency of fingerprint images. The method convolves the sampled fingerprint signal with a given template, which strengthens the signal peaks and suppresses noise, thereby restoring the ridge-valley structure, improving the accuracy of ridge-frequency estimation, and benefiting subsequent steps in fingerprint recognition. Experimental results show that, compared with previous ridge-frequency estimation algorithms, the proposed method has clear advantages in both accuracy and speed, making it better suited to online fingerprint recognition systems.
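The template itself is not given in the abstract; a sketch of the peak-spacing idea on a 1-D ridge signature (obtained by projecting a block perpendicular to the local orientation), with a simple smoothing template standing in for the paper's:

    import numpy as np

    def ridge_frequency(signature, template=None):
        # Convolve the sampled ridge signature with a template to sharpen
        # peaks and suppress noise, then estimate frequency from peak spacing.
        if template is None:
            template = np.array([1.0, 2.0, 3.0, 2.0, 1.0]) / 9.0  # illustrative
        s = np.convolve(signature, template, mode='same')
        peaks = np.where((s[1:-1] > s[:-2]) & (s[1:-1] > s[2:]))[0] + 1
        if len(peaks) < 2:
            return 0.0                        # frequency unreliable in this block
        return 1.0 / np.diff(peaks).mean()    # ridges per pixel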

12.
With the advancement of media-editing software, even people who are not image-processing experts can easily alter digital images. Various methods of digital image forgery exist, such as image splicing, copy-move forgery, and image retouching. The most common is copy-move forgery, in which part of an image is duplicated and used to replace another part of the same image at a different location. In this paper, we present an efficient and robust method to detect such artifacts. First, the tampered image is segmented into overlapping fixed-size blocks and a Gabor filter is applied to each block, so that the Gabor magnitude image represents each block. Second, statistical features are extracted from the histogram of oriented Gabor magnitude (HOGM) of the overlapping blocks, and reduced features are generated for similarity measurement. Finally, the feature vectors are sorted lexicographically, and duplicated image blocks are identified by finding similar block pairs after suitable post-processing. To enhance robustness, a few parameters are proposed for discarding falsely matched blocks. Experimental results demonstrate that the method detects multiple instances of copy-move forgery and precisely locates the duplicated regions, even in images distorted by slight rotation and scaling, JPEG compression, blurring, or brightness adjustment.
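A stripped-down sketch of the matching stage, with plain quadrant means standing in for the HOGM features (the thresholds and minimum-offset test are illustrative assumptions):

    import numpy as np

    def copy_move_candidates(gray, B=16, step=4, thr=1.0, min_shift=16):
        H, W = gray.shape
        feats, pos = [], []
        for i in range(0, H - B + 1, step):
            for j in range(0, W - B + 1, step):
                blk = gray[i:i + B, j:j + B].astype(float)
                # Toy 4-D feature: quadrant means (HOGM features in the paper).
                q = [blk[:B//2, :B//2], blk[:B//2, B//2:],
                     blk[B//2:, :B//2], blk[B//2:, B//2:]]
                feats.append([m.mean() for m in q]); pos.append((i, j))
        feats, pos = np.array(feats), np.array(pos)
        order = np.lexsort(feats.T[::-1])            # lexicographic sort
        pairs = []
        for a, b in zip(order[:-1], order[1:]):      # compare sorted neighbours
            if (np.abs(feats[a] - feats[b]).max() < thr and
                    np.hypot(*(pos[a] - pos[b])) >= min_shift):
                pairs.append((tuple(pos[a]), tuple(pos[b])))
        return pairs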

13.
To address the heavy computation and high time complexity of traditional, fully sampled image fusion methods, a multi-source image fusion model based on compressed sensing (CS) theory is proposed. To satisfy the sparsity requirement, the source images are sparsely represented over an overcomplete two-dimensional discrete cosine transform (DCT) dictionary, and the measurements to be fused are obtained by random sampling. For each image block, fusion weights are computed adaptively from the standard deviation and used to combine the measurements; a gradient pursuit algorithm with an improved step size then recovers the sparse coefficients, yielding the final fused image. Experimental results show that, compared with traditional methods, the proposed fusion model reduces computation and storage while extracting information from the source images more effectively and producing better fused images.
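A minimal sketch of the measurement step and the standard-deviation weighting for one block pair; the DCT dictionary and the improved-step gradient pursuit recovery are beyond this sketch, and all names are illustrative:

    import numpy as np

    def measure(block, Phi):
        # Random CS measurement of a vectorized image block: y = Phi x.
        return Phi @ block.ravel()

    def fuse_block_measurements(y1, y2):
        # Adaptive weights from the measurement standard deviations.
        s1, s2 = y1.std(), y2.std()
        w1 = s1 / (s1 + s2 + 1e-12)
        return w1 * y1 + (1.0 - w1) * y2

    # Example: 8x8 blocks measured at a 50% sampling rate.
    # rng = np.random.default_rng(0)
    # Phi = rng.standard_normal((32, 64)) / np.sqrt(32)
    # y = fuse_block_measurements(measure(b1, Phi), measure(b2, Phi))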

14.
In this paper, we propose a coding algorithm for still images using vector quantization (VQ) and fractal approximation, in which the low-frequency components of an input image are approximated by VQ and the residual is coded by fractal mapping. Conventional fractal coding algorithms indirectly use the gray patterns of the original image with a contraction mapping, whereas the proposed method employs an approximated and then decimated image as the domain pool and uses its gray patterns; it thus exploits fractal approximation without the constraint of contraction mapping. For the approximation of the original image, we employ the discrete cosine transform (DCT) rather than conventional polynomial-based transforms. In addition, for variable block-size segmentation, we use the fractal dimension of a block, which represents the roughness of the gray surface of a region. Computer simulations with several test images show that the proposed method outperforms conventional fractal coding methods for still pictures.

15.
This work presents a new approach and an algorithm for binary image representation, applied to the fast and efficient computation of moments of binary images. The scheme is called image block representation, since it represents the image as a set of non-overlapping rectangular areas. Its main purpose is an efficient binary image representation rather than image compression, and the block-represented image is well suited to fast implementation of various processing and analysis algorithms on a digital computer. The two-dimensional (2-D) statistical moments of the image may be used in image processing and analysis applications; a number of powerful moment-based shape analysis methods have been presented, but they suffer from high computational cost. Real-time computation of moments on block-represented images is achieved by exploiting the rectangular structure of the blocks.
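The key point is that a geometric moment of a block-represented binary image separates into 1-D power sums over each rectangle. A short sketch (the coordinate convention, with inclusive corners, is an assumption):

    import numpy as np

    def power_sum(a, b, p):
        # sum of k**p for integer k in [a, b]
        k = np.arange(a, b + 1)
        return (k ** p).sum()

    def moment(blocks, p, q):
        # blocks: list of (x1, y1, x2, y2) inclusive rectangle corners.
        # m_pq = sum over foreground pixels of x**p * y**q, which factorizes
        # per rectangle into a product of two 1-D power sums.
        return sum(power_sum(x1, x2, p) * power_sum(y1, y2, q)
                   for (x1, y1, x2, y2) in blocks)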

16.
This paper addresses the use of independent component analysis (ICA) for image compression. Our goal is to study the adequacy, for lossy transform compression, of bases learned from data using ICA. Since these bases are in general non-orthogonal, two methods are considered to obtain image representations: matching-pursuit-type algorithms, and orthogonalization of the ICA bases followed by standard orthogonal projection. Several coder architectures are evaluated and compared, using both the usual SNR and a perceptual quality measure called the picture quality scale. We consider four classes of images (natural, faces, fingerprints, and synthetic) to study the generalization and adaptation abilities of the data-dependent ICA bases. We observe that bases learned from natural images generalize well to other classes, while bases learned from the specific classes show good specialization. For example, for fingerprint images our coders perform close to the special-purpose WSQ coder developed by the FBI. For some classes, the visual quality of the images obtained with our coders is similar to that obtained with JPEG2000, currently the state-of-the-art coder and far more sophisticated than a simple transform coder. We conclude that ICA provides an excellent tool for learning a coder for a specific image class, which can even be done from a single image of that class; this is an alternative to hand-tailoring a coder for a given class (as was done, for example, in WSQ for fingerprint images). Another conclusion is that a coder learned from natural images acts like a universal coder, generalizing very well across a wide range of image classes.

17.
Underwater image enhancement has attracted much attention due to the rise of marine resource development in recent years. Benefiting from the powerful representation capability of convolutional neural networks (CNNs), multiple CNN-based underwater image enhancement algorithms have been proposed in the past few years. However, almost all of them operate in the RGB color space, which is insensitive to image properties such as luminance and saturation. To address this problem, we propose UIEC^2-Net, an underwater image enhancement CNN that efficiently and effectively integrates both the RGB and HSV color spaces in a single network. To the best of our knowledge, this is the first deep-learning underwater image enhancement method to use the HSV color space. UIEC^2-Net is an end-to-end trainable network consisting of three blocks: an RGB pixel-level block that implements fundamental operations such as denoising and color-cast removal; an HSV global-adjust block that globally adjusts the luminance, color, and saturation of the underwater image via a novel neural curve layer; and an attention-map block that combines the advantages of the RGB and HSV block outputs by assigning a weight to each pixel. Experimental results on synthetic and real-world underwater images show that the proposed method performs well in both subjective comparisons and objective metrics. The code is available at https://github.com/BIGWangYuDong/UWEnhancement.

18.
This paper proposes a new algorithm that integrates image registration into image super-resolution (SR). Image SR reconstructs a high-resolution (HR) image by fusing multiple low-resolution (LR) images, and a critical step is accurate registration of the LR images, i.e., effective estimation of the motion parameters. Conventional SR algorithms assume either that the motion parameters estimated by existing registration methods are error-free or that they are known a priori. This assumption is impractical in many applications: most existing registration algorithms still incur various degrees of error, and the motion parameters among the LR images are generally unknown a priori. In view of this, this paper presents a new framework that performs image registration and HR reconstruction simultaneously. As opposed to current methods that treat registration and reconstruction as disjoint processes, the new framework estimates both simultaneously and improves them progressively. Further, unlike most algorithms that focus on a translational motion model, the proposed method adopts a more generic model that includes both translation and rotation. An iterative scheme is developed to solve the resulting nonlinear least-squares problem. Experimental results show that the proposed method performs registration and SR effectively for both simulated and real-life images.
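For context, the standard observation model behind such joint formulations relates each LR frame to the unknown HR image through warping, blurring, and decimation (a generic statement of the model, not the paper's exact notation):

    y_k = \mathbf{D}\,\mathbf{B}\,\mathbf{W}(s_k)\,x + n_k, \qquad k = 1,\dots,K,

where W(s_k) warps the HR image x by the motion parameters s_k (here translation plus rotation), B is the blur, D the downsampling operator, and n_k noise; the framework estimates x and the s_k jointly by nonlinear least squares.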

19.
Saliency detection in the compressed domain for adaptive image retargeting
Saliency detection plays an important role in many image processing applications, such as region-of-interest extraction and image resizing. Existing saliency detection models are built in the uncompressed domain. Since most images on the Internet are stored in a compressed format such as JPEG (Joint Photographic Experts Group), we propose a novel saliency detection model in the compressed domain. Intensity, color, and texture features are extracted from the discrete cosine transform (DCT) coefficients in the JPEG bit-stream, and the saliency value of each DCT block is obtained by Hausdorff-distance calculation and feature-map fusion. Based on this saliency model, we further design an adaptive image retargeting algorithm in the compressed domain that uses a multi-operator scheme combining block-based seam carving with image scaling. A new definition of texture homogeneity is given to determine the number of block-based seams to remove. Thanks to the accurate saliency information derived directly from the compressed domain, the proposed retargeting algorithm effectively preserves the visually important regions, efficiently removes the less crucial ones, and significantly outperforms the relevant state-of-the-art algorithms, as demonstrated by in-depth analysis in extensive experiments.

20.
A High-Capacity Blind-Detection Image Fingerprinting Algorithm
To resist collusion attacks, a high-capacity, spatial-domain, blind-detection image fingerprinting algorithm is proposed. Anti-collusion codes (ACC) modulate orthogonal basis vectors to generate binary fingerprints, which are embedded by quantizing the image gray values according to the fingerprint state. To trace colluders, the fingerprint is first extracted from the gray-level interval of each pixel of the suspect image; its inner products with the orthogonal basis vectors yield a new vector, which is processed by soft thresholding before the colluders are finally traced. Because fingerprint embedding and extraction are quantization-based, the scheme is blind…
