Similar Articles
20 similar articles found (search time: 328 ms)
1.
We present a technique implementing space-variant filtering of an image, with kernels belonging to a given family, in time independent of the size and shape of the filter kernel support. The essence of our method is efficient approximation of these kernels, belonging to an infinite family governed by a small number of parameters, as a linear combination of a small number k of “basis” kernels. The accuracy of this approximation increases with k, and requires O(k) storage space. Any kernel in the family may be applied to the image in O(k) time using precomputed results of the application of the basis kernels. Performing linear combinations of these values with appropriate coefficients yields the desired result. A trade-off between algorithm efficiency and approximation quality is obtained by adjusting k. The basis kernels are computed using singular value decomposition, distinguishing this from previous techniques designed to achieve a similar effect. We illustrate by applying our methods to the family of elliptic Gaussian kernels, a popular choice for filtering warped images.
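The basis-kernel idea above can be sketched in a few lines of numpy (an illustrative sketch, not the paper's implementation; the grid size, the sigma range, and the choice k = 5 are assumptions): sample the kernel family over its parameter, take the SVD of the stacked L2-normalized kernels, and approximate any family member by its projection onto the top-k right singular vectors.

```python
import numpy as np

def gaussian_kernel(sigma, size=15):
    # Isotropic Gaussian sampled on a size x size grid, normalized to sum 1.
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    k = np.exp(-(xx**2 + yy**2) / (2 * sigma**2))
    return k / k.sum()

# Sample the parametric family (here sigma varies) and L2-normalize each
# flattened kernel so every family member contributes equally to the SVD.
sigmas = np.linspace(0.5, 4.0, 50)
rows = [gaussian_kernel(s).ravel() for s in sigmas]
family = np.stack([r / np.linalg.norm(r) for r in rows])

# SVD of the family matrix; the top-k right singular vectors act as
# the "basis" kernels.
U, S, Vt = np.linalg.svd(family, full_matrices=False)
k = 5
basis = Vt[:k]                       # k basis kernels, each of length size*size

# Any kernel in the family is approximated by its orthogonal projection
# onto the basis (k coefficients per kernel -> O(k) storage and apply time).
t = gaussian_kernel(2.3).ravel()
t = t / np.linalg.norm(t)
coeffs = basis @ t                   # k coefficients
approx = coeffs @ basis
err = np.linalg.norm(approx - t)     # relative error (t has unit norm)
```

Filtering with any family member then reduces to k precomputed convolutions with the basis kernels plus a per-pixel linear combination, which is the source of the O(k) claim.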

2.
Fast Gauss Bilateral Filtering
In spite of high computational complexity, the bilateral filter and its modifications and extensions have recently become very popular image and shape processing tools. In this paper, we propose a fast and accurate approximation of the bilateral filter. Our approach combines a dimension elevation trick with a Fast Gauss Transform. First we represent the bilateral filter as a convolution in a high dimensional space. Then the convolution is efficiently approximated by using space partitioning and Gaussian function expansions. Advantages of our approach include linear computational complexity, user-specified precision, and an ability to process high dimensional and non-uniformly sampled data. We demonstrate the capabilities of the approach by considering its applications to image and volume denoising and high-dynamic-range tone mapping.

3.
The bilateral filter is a nonlinear filter that smoothes a signal while preserving strong edges. It has demonstrated great effectiveness for a variety of problems in computer vision and computer graphics, and fast versions have been proposed. Unfortunately, little is known about the accuracy of such accelerations. In this paper, we propose a new signal-processing analysis of the bilateral filter which complements the recent studies that analyzed it as a PDE or as a robust statistical estimator. The key to our analysis is to express the filter in a higher-dimensional space where the signal intensity is added to the original domain dimensions. Importantly, this signal-processing perspective allows us to develop a novel bilateral filtering acceleration using downsampling in space and intensity. This affords a principled expression of accuracy in terms of bandwidth and sampling. The bilateral filter can be expressed as linear convolutions in this augmented space followed by two simple nonlinearities. This allows us to derive criteria for downsampling the key operations and achieving important acceleration of the bilateral filter. We show that, for the same running time, our method is more accurate than previous acceleration techniques. Typically, we are able to process a 2 megapixel image using our acceleration technique in less than a second, and have the result be visually similar to the exact computation that takes several tens of minutes. The acceleration is most effective with large spatial kernels. Furthermore, this approach extends naturally to color images and cross bilateral filtering.
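The higher-dimensional view described above can be illustrated on a 1D signal (a minimal sketch of the splat/blur/slice pipeline, not the authors' 2D image implementation; all parameter values are assumptions): splat samples into a downsampled (space, intensity) grid, blur the grid with a separable Gaussian, then slice and divide by the homogeneous weight channel.

```python
import numpy as np

def bilateral_grid_1d(signal, sigma_s=8.0, sigma_r=0.1, ds_s=4.0, ds_r=0.05):
    """Fast bilateral filtering of a 1D signal in [0, 1] via the augmented
    (space, intensity) domain: splat into a downsampled grid, apply the
    linear Gaussian convolution there, then slice and normalize (the two
    simple nonlinearities)."""
    n = len(signal)
    xi = np.round(np.arange(n) / ds_s).astype(int)   # grid space coordinate
    yi = np.round(signal / ds_r).astype(int)         # grid intensity coordinate
    data = np.zeros((xi.max() + 1, yi.max() + 1))
    weight = np.zeros_like(data)
    np.add.at(data, (xi, yi), signal)                # splat values
    np.add.at(weight, (xi, yi), 1.0)                 # splat homogeneous weights

    def gauss(sigma):
        r = max(int(3 * sigma), 1)
        g = np.exp(-np.arange(-r, r + 1) ** 2 / (2 * sigma ** 2))
        return g / g.sum()

    gs, gr = gauss(sigma_s / ds_s), gauss(sigma_r / ds_r)
    for a in (data, weight):
        a[:] = np.apply_along_axis(lambda v: np.convolve(v, gs, 'same'), 0, a)
        a[:] = np.apply_along_axis(lambda v: np.convolve(v, gr, 'same'), 1, a)

    # Slice at each sample's grid position; divide out the weight channel.
    return data[xi, yi] / np.maximum(weight[xi, yi], 1e-8)
```

Downsampling factors `ds_s` and `ds_r` trade accuracy for speed exactly as the bandwidth/sampling analysis in the abstract suggests: the grid stays small as long as the Gaussian bandwidths are large relative to the sampling steps.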

4.
We present a novel combined post-filtering (CPF) method to improve the accuracy of optical flow estimation. Its attractive advantages are that outliers are reduced while discontinuities are well preserved, and occlusions are partially handled. The major contributions are as follows. First, structure tensor (ST) based edge detection is introduced to extract flow edges. We improve detection performance by extending the traditional 2D spatial edge detector into a spatial-scale 3D space, and by using a gradient bilateral filter (GBF) in place of the linear Gaussian filter to construct a multi-scale nonlinear ST. The GBF preserves discontinuities but is computationally expensive, so a hybrid GBF and Gaussian filter (HGBGF) approach based on a spatial-scale gradient signal-to-noise ratio (SNR) measure is proposed to address this inefficiency. Additionally, a piecewise occlusion detection method is used to extract occlusions. Second, we apply the CPF method, which uses a weighted median filter (WMF), a bilateral filter (BF), and a fast median filter (MF) to post-smooth the detected edges, the occlusions, and the remaining flat regions of the flow field, respectively. Benchmark tests on both synthetic and real sequences demonstrate the effectiveness of our method.

5.
Given a time series data stream, the generation of an error-bounded Piecewise Linear Representation (error-bounded PLR) is to construct a number of consecutive line segments to approximate the stream, such that the approximation error does not exceed a prescribed error bound. In this work, we consider the error bound in \(L_\infty \) norm as the approximation criterion, which constrains the approximation error on each corresponding data point, and aim at designing algorithms that generate the minimal number of segments. In the literature, optimal approximation algorithms have been effectively designed in a transformed space rather than the time-value space, while optimal solutions based on the original time domain (i.e., time-value space) are still lacking. In this article, we propose two linear-time algorithms, named OptimalPLR and GreedyPLR, to construct error-bounded PLR for data streams in the time domain. OptimalPLR is an optimal algorithm that generates the minimal number of line segments for the stream approximation, and GreedyPLR is an alternative for settings that demand high efficiency or run in resource-constrained environments. To establish the superiority of OptimalPLR, we theoretically analyzed and compared OptimalPLR with the state-of-the-art optimal solution in the transformed space, which also achieves linear complexity. We proved the theoretical equivalence between the time-value space and that transformed space, and found that OptimalPLR is more efficient in practice. Extensive empirical evaluation supports and demonstrates the effectiveness and efficiency of our proposed algorithms.
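A minimal greedy construction of an error-bounded PLR under the L∞ bound can be sketched as follows (this illustrates the feasible-slope-interval idea only; it is not the paper's OptimalPLR and does not guarantee a minimal segment count):

```python
def greedy_plr(points, eps):
    """Greedy error-bounded PLR under the L-infinity norm: each segment is
    anchored at its first point, and the feasible slope interval is the
    running intersection of per-point slope constraints. `points` is a
    list of (x, y) pairs with strictly increasing x."""
    segments = []
    i, n = 0, len(points)
    while i < n - 1:
        x0, y0 = points[i]
        lo, hi = float('-inf'), float('inf')
        j = i + 1
        while j < n:
            x, y = points[j]
            lo_j = (y - eps - y0) / (x - x0)   # slope must stay >= lo_j ...
            hi_j = (y + eps - y0) / (x - x0)   # ... and <= hi_j for point j
            if max(lo, lo_j) > min(hi, hi_j):
                break                           # interval empty: end segment
            lo, hi = max(lo, lo_j), min(hi, hi_j)
            j += 1
        # Segment: anchor (x0, y0), any slope in [lo, hi], ends at x_{j-1}.
        segments.append((x0, y0, (lo + hi) / 2.0, points[j - 1][0]))
        if j == n:
            break
        i = j - 1                # next segment starts at the last covered point
    return segments
```

Because every retained slope lies in the intersection of all per-point constraints, each segment deviates from its covered points by at most `eps`, which is exactly the L∞ criterion described above.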

6.
We propose a genetic algorithm for the error-bounded polygonal approximation of curves. Its main ideas are: 1) a variable-length chromosome encoding scheme is adopted to reduce storage and computation time; 2) exploiting the structure of the problem, a new crossover operator, gene-deletion crossover, is proposed to remove redundant genes from chromosomes as far as possible and thereby improve the algorithm's search ability; 3) a chromosome-repair strategy handles the infeasible solutions produced by the genetic operators, repairing a chromosome by iteratively appending valuable candidate genes, and a mechanism for evaluating a chromosome's candidate genes is proposed. Experimental evaluation and comparison with other genetic algorithms show that the proposed algorithm performs better.

7.
By a tensor problem in general, we mean one where all the data on input and output are given (exactly or approximately) in tensor formats, the number of data representation parameters being much smaller than the total amount of data. For such problems, it is natural to seek algorithms that work with data only in tensor formats, maintaining the same small number of representation parameters, at the price that every result of computation is contaminated by the approximation (recompression) occurring in each operation. Since approximation time is crucial and depends on the tensor formats in use, in this paper we discuss which formats are best suited to making recompression inexpensive and reliable. We present fast recompression procedures with sublinear complexity with respect to the size of data and propose methods for basic linear algebra operations with all matrix operands in the Tucker format, mostly through calls to highly optimized level-3 BLAS/LAPACK routines. We show that for three-dimensional tensors the canonical format can be avoided without any loss of efficiency. Numerical illustrations are given for approximate matrix inversion via the proposed recompression techniques.

8.
Low-rank structures play important roles in recent advances on many problems in image science and data science. As a natural extension of low-rank structures to data with nonlinear structures, the concept of a low-dimensional manifold structure has been considered in many data processing problems. Inspired by this concept, we consider a manifold-based low-rank regularization as a linear approximation of manifold dimension. This regularization is less restrictive than global low-rank regularization, and thus enjoys more flexibility in handling data with nonlinear structures. As applications, we apply the proposed regularization to classical inverse problems in image science and data science, including image inpainting, image super-resolution, X-ray computed tomography image reconstruction, and semi-supervised learning. We conduct extensive numerical experiments on several image restoration problems and on a semi-supervised learning problem of classifying handwritten digits using the MNIST data. Our numerical tests demonstrate the effectiveness of the proposed methods and show that the new regularization methods produce outstanding results in comparison with many existing methods.

9.
We present a generalization of the convolution-based variational image registration approach, in which different regularizers can be implemented by conveniently exchanging the convolution kernel, even if it is nonseparable or nonstationary. Nonseparable kernels pose a challenge because they cannot be efficiently implemented by separate 1D convolutions. We propose to use a low-rank tensor decomposition to efficiently approximate nonseparable convolution. Nonstationary kernels pose an even greater challenge because the convolution kernel depends on, and needs to be evaluated for, every point in the image. We propose to pre-compute the local kernels and efficiently store them in memory using the Tucker tensor decomposition model. In our experiments we use the nonseparable exponential kernel and a nonstationary landmark kernel. The exponential kernel replicates desirable properties of elastic image registration, while the landmark kernel incorporates local prior knowledge about corresponding points in the images. We examine the trade-off between the computational resources needed and the approximation accuracy of the tensor decomposition methods. Furthermore, we obtain very smooth displacement fields even in the presence of large landmark displacements.

10.
Objective: Building on existing work, we propose a detail-aware texture-removal algorithm that preserves image structure while removing texture, in particular special details such as thin, elongated structures and corner information that other methods tend to blur. Method: First, we propose a structure-detection method that identifies elongated structures and enhances their structural features. Second, to estimate the optimal filter-kernel scale at each pixel, we improve the original relative total variation (RTV) model by searching multiple directions for the minimal relative total variation, so that it better separates texture from boundaries and distinguishes corner information from texture. The detected elongated structures are then normalized onto the improved RTV measure to estimate the kernel scale and generate a guidance image, so that large kernels are applied in flat or textured regions and smaller kernels near structural edges and corners. Finally, the texture-removed image is obtained with a joint bilateral filter. Results: Experiments on mosaic images and artistic paintings, compared against relative total variation and scale-aware structure-preserving filtering, show that our method removes texture while retaining elongated structures and corner details, with good generality and robustness. On an image of 100,000 pixels, a single iteration removes most texture with results better than existing methods; our algorithm takes 3.37 s, versus 0.07-3.29 s for the others. Conclusion: The proposed texture filter not only better preserves details such as elongated structures but also yields sharper corner details after texture removal, providing a powerful preprocessing step for subsequent image processing.

11.
Kernel-based methods are effective for object detection and recognition. However, the computational cost when using kernel functions is high, except when using linear kernels. To realize fast and robust recognition, we apply normalized linear kernels to local regions of a recognition target, and the kernel outputs are integrated by summation. This kernel is referred to as a local normalized linear summation kernel. Here, we show that kernel-based methods that employ local normalized linear summation kernels can be computed by a linear kernel of local normalized features. Thus, the computational cost of the kernel is nearly the same as that of a linear kernel and much lower than that of radial basis function (RBF) and polynomial kernels. The effectiveness of the proposed method is evaluated in face detection and recognition problems, and we confirm that our kernel provides higher accuracy with lower computational cost than RBF and polynomial kernels. In addition, our kernel is also robust to partial occlusion and shadows on faces since it is based on the summation of local kernels.
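The core identity claimed above, that a sum of normalized linear kernels over local regions equals a plain linear kernel of locally normalized features, is easy to check numerically (a sketch; the split into three equal regions and the feature dimension are assumptions):

```python
import numpy as np

def local_norm_features(v, n_regions):
    # Split the feature vector into local regions and L2-normalize each one.
    parts = np.array_split(v, n_regions)
    return np.concatenate([p / (np.linalg.norm(p) + 1e-12) for p in parts])

rng = np.random.RandomState(0)
a, b = rng.rand(12), rng.rand(12)

# Kernel defined as the sum of normalized linear kernels over local regions...
k_sum = sum(
    (pa / np.linalg.norm(pa)) @ (pb / np.linalg.norm(pb))
    for pa, pb in zip(np.array_split(a, 3), np.array_split(b, 3))
)
# ...equals a single linear kernel of the locally normalized feature vectors,
# so evaluation costs the same as a linear kernel.
k_lin = local_norm_features(a, 3) @ local_norm_features(b, 3)
```

This is why the method's cost stays near that of a linear kernel: the nonlinearity is pushed entirely into a one-time feature normalization.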

12.
《图学学报》2018,39(2):209
To address the long runtime and limited accuracy of scale-invariant feature transform (SIFT)-based registration of synthetic aperture radar (SAR) and visible-light images, a registration algorithm combining SIFT with fast approximate nearest-neighbor search (FLANN) is proposed. First, bilateral filtering (BF) suppresses the speckle noise present in the SAR image while protecting edges from the blurring a Gaussian function would cause. Second, feature points are detected in the difference-of-Gaussian scale space and SIFT descriptor vectors are generated, and the FLANN algorithm performs fast matching in the high-dimensional descriptor space. Finally, an improved sampling-consensus algorithm (PROSAC) removes mismatches to further raise the correct-match rate. Experimental results show that the algorithm outperforms the original SIFT algorithm in both registration accuracy and speed.

13.
To recover clear scenes from visible-light images captured in hazy weather and counter the loss of contrast and sharpness caused by haze degradation, this paper proposes a fast and effective dehazing method based on an improved bilateral filter. The method introduces a simple and efficient "Gaussian-like kernel", first reported in this paper, to replace the Gaussian kernel of the conventional bilateral filter. The improved bilateral filter has good edge-preserving properties; using it to accurately refine the estimate of the atmospheric transmission greatly improves computational efficiency. For the atmospheric light, a weighted average of the maximal brightness values in the dark channel and in the original image yields an accurate estimate. The algorithm runs very fast, effectively improves the sharpness and contrast of the restored image, and produces good image color.
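The dark-channel portion of the pipeline can be sketched as follows (a generic dark-channel-prior sketch assuming a float RGB image in [0, 1]; the paper's "Gaussian-like kernel" bilateral refinement of the transmission and its weighted atmospheric-light estimate are omitted here, and `omega`, `t0`, and the patch size are assumed values):

```python
import numpy as np

def dark_channel(img, patch=7):
    # Per-pixel minimum over color channels, then a min filter over a patch.
    mins = img.min(axis=2)
    pad = patch // 2
    p = np.pad(mins, pad, mode='edge')
    h, w = mins.shape
    out = np.empty_like(mins)
    for y in range(h):
        for x in range(w):
            out[y, x] = p[y:y + patch, x:x + patch].min()
    return out

def dehaze(img, omega=0.95, t0=0.1):
    """Dark-channel-prior dehazing sketch: estimate the atmospheric light A,
    derive a raw transmission map t, and invert the haze model
    I = J*t + A*(1 - t) for the scene radiance J."""
    dark = dark_channel(img)
    # Atmospheric light: color of the brightest dark-channel pixel.
    y, x = np.unravel_index(dark.argmax(), dark.shape)
    A = img[y, x]
    t = np.clip(1.0 - omega * dark_channel(img / A.clip(1e-6, None)), t0, 1.0)
    return (img - A) / t[..., None] + A
```

In the paper, the raw `t` above would be refined by the improved bilateral filter before inversion, which is where the edge-preserving property matters.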

14.
For anisotropic objects, a highlight can be regarded as a linear combination of a diffuse and a specular reflection component. Highlight removal from a single image is a very challenging problem in computer vision. Many methods attempt to separate the diffuse and specular components, but they often require preprocessing such as image segmentation, lack robustness, and are time-consuming. This paper designs an efficient highlight-removal method based on the bilateral filter: exploiting the local smoothness of the maximum diffuse chromaticity, a bilateral filter propagates and diffuses the maximal chromaticity values to remove highlights across the whole image. An acceleration strategy is used to speed up the bilateral filter, markedly improving efficiency over currently popular methods. Compared with traditional methods, this method removes highlights more effectively and runs faster, making it well suited to real-time applications.

15.
李知菲  陈源 《计算机应用》2014,34(8):2231-2234
Depth images captured by the Kinect sensor generally suffer from noise and holes, which degrades systems such as human-motion tracking and recognition that use them directly. To address this, a depth-image filtering algorithm based on the joint bilateral filter is proposed. Following the joint-bilateral-filtering principle, the depth image and the color image captured by the Kinect at the same instant serve as inputs. First, a Gaussian kernel computes the spatial-distance weights of the depth image and the intensity weights of the RGB color image; these two weights are then multiplied to give the joint filtering weight, and the fast Gauss transform replaces the Gaussian kernel to construct the joint bilateral filter. Finally, the filter's output is convolved with the noisy image to realize Kinect depth-image filtering. Experimental results show that, applied in a human-motion recognition and tracking system, the algorithm markedly improves noise robustness in cluttered scenes, raising recognition accuracy by 17.3%, with an average runtime of 371 ms, far below comparable algorithms. The algorithm retains the smoothing and edge-preserving advantages of joint bilateral filtering, and because the color image is introduced as a guidance image, it fills holes while denoising; its denoising and inpainting of Kinect depth images therefore outperform the classical bilateral filter and the joint bilateral filter, with strong real-time performance.
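The joint (cross) bilateral principle used above can be sketched in brute-force numpy (illustrative only; the paper replaces the Gaussian kernel with the fast Gauss transform for speed, and the zero-depth hole-handling rule below is an assumption):

```python
import numpy as np

def joint_bilateral(depth, guide, radius=3, sigma_s=2.0, sigma_r=0.1):
    """Joint bilateral filter: spatial weights come from pixel distance,
    range weights from the registered (grayscale, [0, 1]) color guide.
    Zero-valued depth pixels (Kinect holes) get zero weight, so holes are
    filled from valid neighbors while edges in the guide are respected."""
    h, w = depth.shape
    dpad = np.pad(depth, radius, mode='edge')
    gpad = np.pad(guide, radius, mode='edge')
    out = np.zeros_like(depth, dtype=float)
    ys, xs = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    spatial = np.exp(-(xs**2 + ys**2) / (2 * sigma_s**2))
    for y in range(h):
        for x in range(w):
            dwin = dpad[y:y + 2 * radius + 1, x:x + 2 * radius + 1]
            gwin = gpad[y:y + 2 * radius + 1, x:x + 2 * radius + 1]
            range_w = np.exp(-(gwin - guide[y, x])**2 / (2 * sigma_r**2))
            wgt = spatial * range_w * (dwin > 0)   # exclude holes
            s = wgt.sum()
            out[y, x] = (wgt * dwin).sum() / s if s > 0 else 0.0
    return out
```

The guidance image supplies the range term, which is exactly why the color image lets the filter repair holes and keep depth discontinuities aligned with color edges.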

16.
Large scale nonlinear support vector machines (SVMs) can be approximated by linear ones using a suitable feature map. The linear SVMs are in general much faster to learn and evaluate (test) than the original nonlinear SVMs. This work introduces explicit feature maps for the additive class of kernels, such as the intersection, Hellinger's, and χ2 kernels, commonly used in computer vision, and enables their use in large scale problems. In particular, we: 1) provide explicit feature maps for all additive homogeneous kernels along with closed form expression for all common kernels; 2) derive corresponding approximate finite-dimensional feature maps based on a spectral analysis; and 3) quantify the error of the approximation, showing that the error is independent of the data dimension and decays exponentially fast with the approximation order for selected kernels such as χ2. We demonstrate that the approximations have indistinguishable performance from the full kernels yet greatly reduce the train/test times of SVMs. We also compare with two other approximation methods: Nystrom's approximation of Perronnin et al., which is data dependent, and the explicit map of Maji and Berg for the intersection kernel, which, as in the case of our approximations, is data independent. The approximations are evaluated on a number of standard data sets, including Caltech-101, Daimler-Chrysler pedestrians, and INRIA pedestrians.
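Among the additive kernels mentioned above, Hellinger's kernel is the degenerate case with an exact finite-dimensional feature map, which a few lines verify (the χ2 and intersection kernels require the paper's spectral approximation instead; the histogram dimension here is an assumption):

```python
import numpy as np

rng = np.random.RandomState(1)
x, y = rng.rand(8), rng.rand(8)
x, y = x / x.sum(), y / y.sum()        # L1-normalized histograms

# Hellinger's kernel K(x, y) = sum_i sqrt(x_i * y_i) is additive and admits
# the exact feature map phi(x) = sqrt(x), so a *linear* SVM trained on
# sqrt-features reproduces the nonlinear kernel machine exactly.
k_exact = np.sum(np.sqrt(x * y))
k_map = np.sqrt(x) @ np.sqrt(y)
```

This is the simplest instance of the trade the abstract describes: a fixed, data-independent feature map that converts a nonlinear kernel SVM into a fast linear one.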

17.
Zhong  Yuanhong  Zhang  Jing  Zhou  Zhaokun  Cheng  Xinyu  Huang  Guan  Li  Qiang 《Multimedia Tools and Applications》2021,80(5):7433-7450

In recent years, block-based compressive sensing (BCS) has been extensively studied because it can reduce computational complexity and data storage by dividing the image into smaller patches, but the performance of the reconstruction algorithm is not satisfactory. In this paper, a new reconstruction model for image and video is proposed. The model makes full use of spatio-temporal correlation and utilizes low-rank tensor approximation to improve the quality of the reconstructed image and video. For image recovery, the proposed model obtains a low-rank approximation of a tensor formed by non-local similar patches, and improves the reconstruction quality from a spatial perspective by combining non-local similarity and low-rank property. For video recovery, the reconstruction process is divided into two phases. In the first phase, each frame of the video sequence is regarded as an independent image to be reconstructed by taking advantage of spatial property. The second phase performs tensor approximation through searching similar patches within frames near the target frame, to achieve reconstruction by putting the spatio-temporal correlation into full play. The resulting model is solved by an efficient Alternating Direction Method of Multipliers (ADMM) algorithm. A series of experiments show that the quality of the proposed model is comparable to the current state-of-the-art recovery methods.


18.
In this paper, a novel multiscale geometrical analysis called the multiscale directional bilateral filter (MDBF), which introduces the nonsubsampled directional filter bank into the multiscale bilateral filter, is proposed. By combining the edge-preserving property of the bilateral filter with the directional-information-capturing ability of the directional filter bank, the MDBF better represents the intrinsic geometrical structure of images. The MDBF, a multiscale, multidirectional and shift-invariant image decomposition scheme, is used here to fuse multisensor images. The source images are first decomposed into directional detail subbands and approximation subbands via the MDBF. Then, the directional detail subbands and the approximation subbands are fused according to the given fusion rules, respectively. Finally, the inverse MDBF is applied to the fused subbands to obtain the fused image. Experimental results on visible/infrared images and medical images demonstrate the superiority of our method over conventional methods in terms of visual inspection and objective measures.

19.
Images captured in hazy conditions suffer from low contrast and low scene visibility, and some dehazing algorithms produce halo artifacts. Dehazing algorithms based on the dark-channel prior spend a large amount of time refining the transmission map. This paper therefore proposes an image-restoration method that combines the dark-channel dehazing algorithm with depth-of-field optimization: different templates are applied at depth edges and in non-depth-edge regions to obtain the dark image and hence a coarse transmission map, which a bilateral filter then refines. The algorithm effectively removes haze from images; compared with soft matting, it not only reduces halo artifacts but also greatly shortens the transmission-processing time and improves efficiency. Experiments show that the algorithm's time complexity is linear in image size, giving a speedup over traditional algorithms and ensuring real-time image processing.

20.
The recently proposed Bilateral Filter Luminance Proportional (BFLP) method extracts the high-frequency details from panchromatic (Pan) image via a multiscale bilateral filter and adds them proportionally to the multispectral (MS) image. Although this approach seems similar to other multiresolution (MRA) based schemes such as Additive Wavelet proportional Luminance (AWLP) or Generalized Laplacian (GLP) methods, multiscale bilateral filter obtains the detail planes to be injected to MS image by the combination of two Gaussian kernels controlling the transfer of details and performing successively in spatial and range domains, thus it has two parameters to be defined, namely spatial and range parameters. Since the parameter determination step considerably affects the efficiency of the method, in this paper we propose a single parameter bilateral filter by approximating the Gaussian kernel with the bicubic kernel of à trous wavelet transform (ATWT) or modulation transfer function (MTF). Moreover, we adopt an adaptive injection scheme where the range parameter is determined adaptively so as to follow the statistics of the images to be fused. The pansharpening results are compared with ATWT-based methods, as well as some state-of-the-art methods and BFLP. The visual and quantitative comparisons for Système Pour l’Observation de la Terre 7 (SPOT 7) and Pléiades 1A images, field studies supported with UAV (Unmanned Aerial Vehicle) images and digitization results of the chosen areas in Istanbul Technical University (ITU) Maslak campus confirm the superiority of the proposed detail injection approach.
