Similar Documents
20 similar documents found (search time: 15 ms)
1.
To address the long time needed to decrypt an encrypted image, an encrypted-image preview algorithm based on the chaotic tent map cipher is proposed. The algorithm first generates a preview of the original image, either by selecting its key regions with image recognition and segmentation techniques or by producing a thumbnail; it then encrypts the original image and the preview image separately with the chaotic tent map cipher; finally, it packages the two ciphertexts into a single encrypted image. Because the preview can be decrypted before the original, the scheme provides a preview function for encrypted images. Experiments show that the algorithm enables previewing an encrypted image before full decryption, with good preview quality and low time cost.
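The abstract does not give implementation details, but the core tent-map idea can be sketched in a few lines of Python: iterate the chaotic tent map to produce a keystream and XOR it with the image bytes. The key parameters `x0` and `mu` and the byte quantization below are illustrative assumptions, not the paper's actual cipher.

```python
def tent_keystream(x0, mu, n):
    """Generate n pseudo-random bytes by iterating the chaotic tent map."""
    x, out = x0, []
    for _ in range(n):
        x = mu * x if x < 0.5 else mu * (1.0 - x)   # tent map iteration
        out.append(int(x * 256) & 0xFF)             # quantize state to a byte
    return out

def tent_xor(data, x0=0.37, mu=1.9999):
    """Encrypt/decrypt bytes by XOR with the tent-map keystream."""
    ks = tent_keystream(x0, mu, len(data))
    return bytes(b ^ k for b, k in zip(data, ks))
```

Because XOR is its own inverse, the same function decrypts; a real scheme would additionally permute pixel positions and encrypt the preview and original with separate keys.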

2.
Image Fusion for Enhanced Visualization: A Variational Approach
We present a variational model to fuse an arbitrary number of images while preserving their salient information and enhancing contrast for visualization. We propose to use the structure tensor to describe the geometry of all the inputs simultaneously. The basic idea is that the fused image should have a structure tensor that approximates the structure tensor obtained from the multiple inputs. At the same time, the fused image should appear ‘natural’ and ‘sharp’ to a human interpreter. We therefore combine the geometry merging of the inputs with perceptual enhancement and intensity correction. This is achieved by minimizing a functional that implicitly takes a set of human vision characteristics into account.
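As a rough illustration of the geometry descriptor, the joint structure tensor of several inputs at a pixel is the sum of the per-image outer products of the intensity gradients; the central-difference scheme below is an assumption, not the paper's discretization.

```python
def gradients(img):
    """Central-difference gradients of a 2-D list-of-lists grayscale image."""
    h, w = len(img), len(img[0])
    gx = [[(img[y][min(x + 1, w - 1)] - img[y][max(x - 1, 0)]) / 2.0
           for x in range(w)] for y in range(h)]
    gy = [[(img[min(y + 1, h - 1)][x] - img[max(y - 1, 0)][x]) / 2.0
           for x in range(w)] for y in range(h)]
    return gx, gy

def joint_structure_tensor(images, x, y):
    """Sum the per-image structure tensors at pixel (x, y): J = sum_i g_i g_i^T."""
    jxx = jxy = jyy = 0.0
    for img in images:
        gx, gy = gradients(img)
        jxx += gx[y][x] ** 2
        jxy += gx[y][x] * gy[y][x]
        jyy += gy[y][x] ** 2
    return [[jxx, jxy], [jxy, jyy]]
```

The fused image would then be sought so that its own structure tensor approximates this joint tensor at every pixel.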

3.
A motion deblurring algorithm is proposed to improve restoration quality based on point spread function (PSF) identification in the frequency spectrum. An improved blur-angle identification algorithm, characterized by a bilateral piecewise estimation strategy and a membership-function method, is presented by formulating the edges of the central bright stripe. Subsequently, a subpixel-level image generated with bilinear interpolation is employed in blur-length estimation by computing the distance between two adjacent dark stripes. Comparisons with existing algorithms show that the proposed PSF estimation scheme not only achieves higher accuracy for the blur angle and blur length but also produces more convincing restorations. The robustness of the method is further validated under different noise conditions.
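The subpixel image used for blur-length estimation relies on standard bilinear interpolation, which can be sketched as follows (pure Python on a list-of-lists image; a hypothetical helper, not the authors' code):

```python
import math

def bilinear(img, x, y):
    """Sample a 2-D list-of-lists image at subpixel coordinates (x, y)."""
    x0, y0 = int(math.floor(x)), int(math.floor(y))
    x1 = min(x0 + 1, len(img[0]) - 1)
    y1 = min(y0 + 1, len(img) - 1)
    fx, fy = x - x0, y - y0
    top = img[y0][x0] * (1 - fx) + img[y0][x1] * fx   # blend along x, upper row
    bot = img[y1][x0] * (1 - fx) + img[y1][x1] * fx   # blend along x, lower row
    return top * (1 - fy) + bot * fy                  # blend along y
```

Measuring the spacing of dark stripes in such an upsampled spectrum gives sub-pixel blur-length estimates.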

4.
Perceptual grouping of segmented regions in color images
Jiebo, Cheng-en. Pattern Recognition, 2003, 36(12): 2781-2792
Image segmentation is often the first, yet important, step of an image understanding system. However, general-purpose segmentation algorithms that do not rely on specific object models still cannot produce perceptually coherent segmentations at a level comparable to humans. Over-segmentation and under-segmentation have plagued the research community in spite of many significant advances in the field. Grouping of segmented regions therefore plays a significant role in bridging image segmentation and high-level image understanding. In this paper, we focus on non-purposive grouping (NPG), which is built on general expectations of a perceptually desirable segmentation rather than on any object-specific model, so that the grouping algorithm is applicable to any image understanding application. We propose a probabilistic model for the NPG problem by defining the regions as a Markov random field (MRF). A collection of energy functions is used to characterize desired single-region properties (region area, convexity, compactness, and color variance within a region) and pairwise region properties (color-mean difference between two regions, edge strength along the shared boundary, color variance of the cross-boundary area, and contour continuity between two regions). The grouping process is implemented as a greedy method using the highest confidence first (HCF) principle. Experiments on hundreds of color photographic images show the effectiveness of the grouping algorithm with a fixed set of parameters.
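A minimal sketch of HCF-style greedy grouping, substituting a simple color-mean difference for the paper's full MRF energy; the region encoding (`id -> (mean, size)`), the adjacency sets, and the `max_diff` stopping threshold are all illustrative assumptions:

```python
def merge_regions(regions, adjacency, max_diff=10.0):
    """Greedily merge the adjacent region pair with the smallest color-mean
    difference (highest confidence first), until no pair is similar enough."""
    regions = dict(regions)                      # id -> (mean_color, size)
    adj = {k: set(v) for k, v in adjacency.items()}
    while True:
        best = None
        for a in adj:
            for b in adj[a]:
                if a < b:                        # visit each pair once
                    diff = abs(regions[a][0] - regions[b][0])
                    if best is None or diff < best[0]:
                        best = (diff, a, b)
        if best is None or best[0] > max_diff:
            return regions
        _, a, b = best
        ma, sa = regions[a]
        mb, sb = regions[b]
        regions[a] = ((ma * sa + mb * sb) / (sa + sb), sa + sb)  # merged mean
        del regions[b]
        neighbours_b = adj.pop(b)
        adj[a] = (adj[a] | neighbours_b) - {a, b}
        for n in list(adj):                      # rewire b's neighbours to a
            if b in adj[n]:
                adj[n].discard(b)
                if n != a:
                    adj[n].add(a)
```

The real algorithm would score each candidate merge with the full set of single-region and pairwise energies rather than a single color difference.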

5.
6.
Due to the huge gap between the high dynamic range of natural scenes and the limited range of consumer-grade cameras, a single shot can hardly record all the information in a scene. Multi-exposure image fusion (MEF) has been an effective solution that integrates multiple shots with different exposures, and it is in essence an enhancement problem. During fusion, two perceptual factors, informativeness and visual realism, must be considered simultaneously. To this end, this paper presents a deep perceptual enhancement network for MEF, termed DPE-MEF. Specifically, DPE-MEF contains two modules: one gathers content details from the inputs, while the other handles color mapping and correction for the final result. Extensive experiments and ablation studies demonstrate the efficacy of our design and its superiority over state-of-the-art alternatives, both quantitatively and qualitatively. We also verify the flexibility of the proposed strategy in improving the exposure quality of single images. Moreover, DPE-MEF can fuse 720p images at over 60 pairs per second on an Nvidia 2080Ti GPU, making it attractive for practical use. Our code is available at https://github.com/dongdong4fei/DPE-MEF.
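DPE-MEF itself is a learned network, but the underlying MEF idea can be illustrated with a classical hand-crafted baseline: weight each exposure per pixel by a "well-exposedness" score that favors mid-range intensities, then average. This is a stand-in for comparison, not the paper's method; the Gaussian weight and `sigma` value are conventional choices.

```python
import math

def well_exposedness(p, sigma=0.2):
    """Weight favoring mid-range intensities (p in [0, 1])."""
    return math.exp(-((p - 0.5) ** 2) / (2 * sigma ** 2))

def fuse_exposures(images):
    """Per-pixel weighted average of an exposure stack of equally sized images."""
    h, w = len(images[0]), len(images[0][0])
    fused = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            ws = [well_exposedness(img[y][x]) for img in images]
            total = sum(ws) or 1.0          # guard against all-zero weights
            fused[y][x] = sum(wt * img[y][x]
                              for wt, img in zip(ws, images)) / total
    return fused
```

A well-exposed pixel in any input dominates the average, which is the informativeness criterion in its simplest form; the deep model learns this weighting implicitly.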

7.
Visualization of spatio-temporal processes has significant research value. Targeting spatio-temporal processes, and building on an analysis of spatio-temporal data classification and of spatio-temporal processes themselves, this paper proposes a spatio-temporal process visualization model, S-TProc_VisModel, which integrates spatio-temporal visual variables, perceptual characteristics, visualization categories, and presentation forms. The model consists of five parts (model input, visual variables, visual perception characteristics, spatio-temporal process visualization forms, and model output) and can be divided into seven layers: spatio-temporal data, spatio-temporal process, visual variables, visual perception, visualization type, presentation form, and visualization result. The model can effectively express not only the position and shape of spatio-temporal elements but also the spatio-temporal processes and temporal changes within an event. Experiments demonstrate that S-TProc_VisModel fully presents the content of a spatio-temporal process and displays its details vividly and completely.

8.
Huang and Hsu (1981) describe an image sequence enhancement algorithm based on computing motion vectors between successive frames and using these vectors to determine the correspondence between pixels for frame averaging. In this note, we demonstrate that it may be sufficient to use only the components of the motion vectors in the gradient direction (called the normal components) to perform the enhancement.
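The normal component mentioned in the note is simply the projection of the motion vector onto the image-gradient direction:

```python
def normal_component(v, grad):
    """Project motion vector v = (vx, vy) onto the gradient direction.
    Returns the component of v along grad; zero where the gradient vanishes
    (there the normal flow is undefined)."""
    gx, gy = grad
    g2 = gx * gx + gy * gy
    if g2 == 0.0:
        return (0.0, 0.0)
    s = (v[0] * gx + v[1] * gy) / g2
    return (s * gx, s * gy)
```

Only this component is directly observable from brightness changes (the aperture problem), which is why it can suffice for correspondence in frame averaging.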

9.
This study follows the direct approach to image contrast enhancement, which changes the contrast at each pixel and is more effective than the indirect approach based on image histograms. However, only a few studies follow the direct approach because it is, by nature, very complex; it is also difficult to develop an effective method, since a balance must be kept between maintaining local and global image features while changing the contrast at each individual pixel. Moreover, raw images obtained from many sources and randomly influenced by many external factors can be regarded as fuzzy, uncertain data. In this context, we propose a novel method to apply and directly handle expert fuzzy linguistic knowledge of image contrast enhancement, simulating the human ability to use natural language. The formalism developed in the study is based on hedge algebras, a theory that can directly handle the linguistic words of variables. This allows the proposed method to produce a contrast intensification operator from a given expert linguistic rule base. A technique to preserve global as well as local image features is proposed based on fuzzy clustering, applied for the first time in this field to reveal regional features of raw images. The projections of the obtained clusters onto each channel are suitably aggregated to produce a new channel image, which serves as input to the pixelwise operators defined in this study. Many experiments are performed to demonstrate the effectiveness of the proposed method against its counterparts.

10.
In this work, a method to enhance images based on a new artificial life model is presented. The model is inspired by the behavior of a herbivorous organism placed in an environment in which it selects its food. The organism travels through the image iteratively, selecting the most suitable food and eating parts of it in each iteration. The path the organism takes through the image is defined by a priori knowledge about the environment and how the organism should move in it. We model the control and perception centers of the organism, as well as the simulation of its actions and their effects on the environment. To demonstrate the efficiency of our method, quantitative and qualitative results are presented for the enhancement of synthetic and real images with low contrast and different levels of noise. The results confirm the ability of the new artificial life model to improve the contrast of objects in the input images.

11.
付炜. 《计算机应用》 (Journal of Computer Applications), 2004, 24(12): 1-3
Remote sensing image reconstruction based on feature-level data fusion combines information while emphasizing the spatial structure and texture of target ground objects. On the basis of wavelet multiresolution analysis, the wavelet transform is used to enhance the edges of target objects in a high-resolution remote sensing image, which is then fused with a multispectral image at the feature level. In the fusion process, the R, G, and B bands of the multispectral image are first wavelet-decomposed to obtain the corresponding low-frequency images; the feature-enhanced high-resolution image is then wavelet-decomposed, and its high-frequency subbands are fused with those low-frequency images; finally, the fused bands are composed into an RGB color image. The method improves the sharpness and resolution of the image while preserving the spectral information of the original, as verified by fusion experiments.
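A minimal single-level Haar version of the described fusion rule, applied per band: take the low-frequency subband from the multispectral band and the high-frequency subbands from the high-resolution image. The abstract does not name the wavelet or level count, so Haar with one level is an assumption.

```python
def haar2(img):
    """Single-level 2-D Haar decomposition of an even-sized grayscale image."""
    h, w = len(img) // 2, len(img[0]) // 2
    LL = [[0.0] * w for _ in range(h)]; LH = [[0.0] * w for _ in range(h)]
    HL = [[0.0] * w for _ in range(h)]; HH = [[0.0] * w for _ in range(h)]
    for i in range(h):
        for j in range(w):
            a, b = img[2 * i][2 * j], img[2 * i][2 * j + 1]
            c, d = img[2 * i + 1][2 * j], img[2 * i + 1][2 * j + 1]
            LL[i][j] = (a + b + c + d) / 4.0   # average (low frequency)
            LH[i][j] = (a - b + c - d) / 4.0   # horizontal detail
            HL[i][j] = (a + b - c - d) / 4.0   # vertical detail
            HH[i][j] = (a - b - c + d) / 4.0   # diagonal detail
    return LL, LH, HL, HH

def ihaar2(LL, LH, HL, HH):
    """Inverse of haar2."""
    h, w = len(LL), len(LL[0])
    img = [[0.0] * (2 * w) for _ in range(2 * h)]
    for i in range(h):
        for j in range(w):
            s, p, q, r = LL[i][j], LH[i][j], HL[i][j], HH[i][j]
            img[2 * i][2 * j] = s + p + q + r
            img[2 * i][2 * j + 1] = s - p + q - r
            img[2 * i + 1][2 * j] = s + p - q - r
            img[2 * i + 1][2 * j + 1] = s - p - q + r
    return img

def fuse_band(ms_band, pan):
    """Low frequency from the multispectral band, details from the pan image."""
    LLm, _, _, _ = haar2(ms_band)
    _, LHp, HLp, HHp = haar2(pan)
    return ihaar2(LLm, LHp, HLp, HHp)
```

Running `fuse_band` on each of the R, G, and B bands and stacking the results reproduces the abstract's pipeline in miniature: spatial detail from the pan image, spectral content from the multispectral one.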

12.
Feature extraction and image segmentation (FEIS) are two primary goals of almost all image understanding systems, and they are the issues we examine in this paper. We view FEIS as a multilevel process of grouping and describing at each level. We emphasize the importance of grouping in this process because we believe many features and events in real images are perceived only by combining weak evidence from several organized pixels or other low-level features. To realize FEIS under this formulation, we must address problems such as how to discover grouping rules, how to develop grouping systems that integrate those rules, how to embed grouping processes into FEIS systems, and how to evaluate the quality of extracted features at various levels. We use self-organizing networks to develop grouping systems that take the organization of human visual perception into consideration. We demonstrate our approach on two concrete problems, extracting linear features in digital images and partitioning color images into regions, and we present experimental results on real images.

13.
Both image enhancement and image segmentation are important pre-processing steps in many image processing fields, including autonomous navigation, remote sensing, computer vision, and biomedical image analysis. Each has its merits and its shortcomings, which raises an obvious question: is it possible to develop a better image enhancement method that has key elements of both segmentation and enhancement techniques? The choice of the threshold level is a key task in image segmentation, and there are other challenges as well; for example, segmentation is very difficult on poor-quality data containing shadows and noise. Recently, a homothetic-curve, Fibonacci-based cross-section thresholding method was developed for denoising purposes. Is it possible to develop a new image cross-section thresholding method usable for both segmentation and enhancement? This paper (a) describes a unified approach to signal thresholding; (b) extends the cross-section concept by generating and using a new class of monotonic, piecewise-linear sequences of numbers (growing more slowly or faster than the Fibonacci numbers); and (c) applies the extended concept to image enhancement and segmentation. Extensive experimental evaluation demonstrates that the proposed monotonic sequences have great potential in image processing applications, including segmentation and enhancement. Moreover, the study shows that the generalized cross-section techniques are invariant under morphological transformations such as erosion, dilation, and median filtering, can be described analytically, and can be implemented with lookup tables.
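The generalized monotonic sequences can be sketched as a Fibonacci-like recurrence with a growth knob, then used as threshold levels for cross-section quantization; the `growth` parameter and the quantize-down rule below are illustrative assumptions, not the paper's exact construction:

```python
def monotone_sequence(n, growth=1.0):
    """Fibonacci-like thresholds: t_k = t_{k-1} + round(growth * t_{k-2}).
    growth=1 reproduces a Fibonacci-style sequence; growth < 1 grows more
    slowly, growth > 1 faster. The sequence is strictly increasing."""
    seq = [1, 2]
    while len(seq) < n:
        step = max(1, int(round(growth * seq[-2])))
        seq.append(seq[-1] + step)
    return seq[:n]

def threshold_cross_sections(signal, levels):
    """Quantize each sample down to the largest level not exceeding it."""
    out = []
    for v in signal:
        q = 0
        for t in levels:
            if t <= v:
                q = t
        out.append(q)
    return out
```

Dense levels in the low range preserve shadow detail (enhancement use), while a sparse subset of the same levels yields a coarse segmentation-style quantization.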

14.
A feature-based robust digital image watermarking against geometric attacks
Based on scale-space theory and an image normalization technique, a new feature-based image watermarking scheme robust to general geometric attacks is proposed in this paper. First, the Harris–Laplace detector is used to extract stable feature points from the host image; then, local feature regions (LFRs) are determined adaptively according to the characteristic scale theory and normalized by an image normalization technique; finally, following the predistortion compensation theory, several copies of the digital watermark are embedded into the non-overlapping normalized LFRs by comparing DFT mid-frequency magnitudes. Experimental results show that the proposed scheme is not only invisible and robust against common signal processing operations such as median filtering, sharpening, noise addition, and JPEG compression, but also robust against general geometric attacks such as rotation, translation, scaling, row or column removal, shearing, local geometric distortion, and combinations thereof.

15.
In this paper, we propose an approach to interactive navigation in image collections. As structured groups are more appealing to users than flat image collections, we propose an image clustering algorithm, with an incremental version that handles time-varying collections. A 3D graph-based visualization technique reflects the classification state. While this classification visualization is itself interactive, we show how user feedback may assist the classification, thus enabling a user to improve it.

16.
Traditional image enhancement algorithms greatly amplify noise while increasing contrast, so the image must also be denoised. Wavelet-based enhancement accounts for both the spatial- and frequency-domain characteristics of the image signal, but it does not fully consider the nonlinearity of human vision. To address this shortcoming of existing enhancement techniques, and based on an analysis of how the wavelet transform affects noise, a new image enhancement algorithm based on the wavelet multiscale property is proposed. It uses the correlation between wavelet coefficients at different scales, together with the time-frequency localization of wavelet analysis, to effectively separate noise from image information, alleviating the noise-amplification problem in image enhancement.
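The cross-scale correlation idea, keeping a fine-scale wavelet detail only when a coarse-scale detail co-occurs at the same location, can be sketched in 1-D with Haar analysis; the threshold `t`, the nearest-neighbor upsampling, and the reconstruction shortcut are illustrative assumptions:

```python
def haar_details(sig):
    """One Haar analysis level: (approximation, detail) of an even-length signal."""
    approx = [(sig[2 * i] + sig[2 * i + 1]) / 2.0 for i in range(len(sig) // 2)]
    detail = [(sig[2 * i] - sig[2 * i + 1]) / 2.0 for i in range(len(sig) // 2)]
    return approx, detail

def denoise_by_scale_correlation(sig, t=0.5):
    """Keep a level-1 detail coefficient only where the level-2 detail at the
    same location agrees (their product is large); isolated, noise-like
    details are suppressed before reconstruction."""
    a1, d1 = haar_details(sig)
    a2, d2 = haar_details(a1)
    d2_up = [d2[i // 2] for i in range(len(d1))]      # upsample level-2 detail
    d1_kept = [d if abs(d * u) >= t else 0.0 for d, u in zip(d1, d2_up)]
    out = []                                          # invert the first level
    for a, d in zip(a1, d1_kept):
        out.extend([a + d, a - d])
    return out
```

A genuine edge produces large details at both scales and survives; an isolated spike produces a large detail only at the finest scale and is zeroed, which is exactly the signal/noise discrimination the abstract describes.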

17.
To improve the classification performance and efficiency of hashing algorithms, an image hashing algorithm based on gradient-variation features and energy features is proposed. First, the input image is preprocessed into a secondary image, and the Sobel operator computes x- and y-direction gradients for the red, green, and blue channels of this secondary image; the per-channel gradients are summed to obtain the final gradient image. The multi-directional variation of the gradient magnitudes is taken as the gradient feature, and the energy of every image block as the energy feature. Finally, the gradient and energy features are concatenated and scrambled to obtain the hash sequence. Experimental results show that the algorithm achieves a good balance between discrimination and robustness. Compared with recent and well-performing hashing algorithms, it has the best ROC curve and the shortest running time (0.0242 s on average), and in copy-detection comparison experiments its precision-recall curve is the best.
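A toy single-channel version of the gradient/energy pipeline: Sobel gradient magnitude, per-block energies, binarization against the global mean, and Hamming comparison of hashes. The block size and binarization rule are assumptions, and the paper's scrambling step is omitted.

```python
def sobel_magnitude(img):
    """Approximate Sobel gradient magnitude (|gx| + |gy|) for interior pixels."""
    h, w = len(img), len(img[0])
    mag = [[0.0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            gx = (img[y - 1][x + 1] + 2 * img[y][x + 1] + img[y + 1][x + 1]
                  - img[y - 1][x - 1] - 2 * img[y][x - 1] - img[y + 1][x - 1])
            gy = (img[y + 1][x - 1] + 2 * img[y + 1][x] + img[y + 1][x + 1]
                  - img[y - 1][x - 1] - 2 * img[y - 1][x] - img[y - 1][x + 1])
            mag[y][x] = abs(gx) + abs(gy)
    return mag

def block_hash(img, block=2):
    """Binary hash: 1 where a block's gradient energy exceeds the global mean."""
    mag = sobel_magnitude(img)
    bh, bw = len(img) // block, len(img[0]) // block
    energies = []
    for i in range(bh):
        for j in range(bw):
            e = sum(mag[i * block + a][j * block + b] ** 2
                    for a in range(block) for b in range(block))
            energies.append(e)
    mean = sum(energies) / len(energies)
    return [1 if e > mean else 0 for e in energies]

def hamming(h1, h2):
    """Number of differing hash bits; small means 'likely the same image'."""
    return sum(a != b for a, b in zip(h1, h2))
```

A copy-detection query would compare the query hash against the database with `hamming` and flag matches below a distance threshold.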

18.
A good objective metric for image quality assessment (IQA) should be consistent with the subjective judgment of human beings. In this paper, a four-stage perceptual approach for full-reference IQA is presented. In the first stage, visual features are extracted with 2-D Gabor filters, which model well the receptive fields of simple cells in the primary visual cortex. In the second stage, the extracted features are post-processed by a divisive normalization transform to reflect the nonlinear mechanisms of the human visual system. In the third stage, the mutual information between the visual features of the reference and distorted images is used to measure visual quality. In the final pooling stage, the mutual information is converted to the objective quality score. Experimental results show that the proposed metric correlates highly with subjective assessment and outperforms other state-of-the-art metrics.
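The third-stage similarity measure is ordinary mutual information, which can be estimated from joint histograms of the two feature sequences; the bin count and the [0, 1) value range below are illustrative choices, not the paper's settings:

```python
import math

def mutual_information(xs, ys, bins=8):
    """Histogram estimate of mutual information (in bits) between two equally
    long feature sequences with values in [0, 1)."""
    n = len(xs)
    joint = {}
    px = [0] * bins
    py = [0] * bins
    for x, y in zip(xs, ys):
        i = min(int(x * bins), bins - 1)   # bin indices, clamped to the range
        j = min(int(y * bins), bins - 1)
        joint[(i, j)] = joint.get((i, j), 0) + 1
        px[i] += 1
        py[j] += 1
    mi = 0.0
    for (i, j), c in joint.items():
        pxy = c / n
        mi += pxy * math.log2(pxy / ((px[i] / n) * (py[j] / n)))
    return mi
```

Identical feature maps give maximal mutual information, and distortion reduces it, so the pooled value behaves as a quality score.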

19.

20.
This paper presents a new method for edge-preserving color image denoising based on the tensor voting framework, a robust perceptual grouping technique used to extract salient information from noisy data. The framework is adapted to encode color information in tensors and to propagate it within a neighborhood through a specific voting process. This voting process is designed for edge-preserving color image denoising by taking into account perceptual color differences, region uniformity, and edginess according to a set of intuitive perceptual criteria. Perceptual color differences are estimated with an optimized version of the CIEDE2000 formula, while uniformity and edginess are estimated from saliency maps obtained in the tensor voting process. Measurements of removed noise, edge preservation, and undesirable introduced artifacts, in addition to visual inspection, show that the proposed method outperforms state-of-the-art denoising algorithms on images contaminated with CCD camera noise.


Copyright © 北京勤云科技发展有限公司 (Beijing Qinyun Technology Development Co., Ltd.)  京ICP备09084417号