Similar Documents
20 similar documents were retrieved.
1.
Scale is a widely used notion in image analysis that evolved into scale-space theory, whose key idea is to represent and analyze an image at various resolutions. Recently, the notion of localized scale, a space-variant resolution scheme, has drawn significant research interest. Previously, we reported a local morphometric scale based on a spherical model. A major limitation of the spherical model is that it ignores structure orientation and anisotropy, and it therefore fails to be optimal in many imaging applications, including biomedical ones, where structures are inherently anisotropic and have mixed orientations. Here, we introduce a new concept called "tensor scale": a local morphometric parameter yielding a unified representation of structure size, orientation, and anisotropy. A few applications of tensor scale in computer vision and image analysis, especially image filtering, are also illustrated. At any image point, the tensor scale is the parametric representation of the largest ellipse (in 2-D) or ellipsoid (in 3-D) centered at that point and contained in the same homogeneous region. An algorithmic framework for computing tensor scale at any image point is proposed, and results of its application to several real images are presented. The behavior of the tensor scale computation under image rotation, varying pixel size, and background inhomogeneity is also studied. A quantitative analysis evaluates the method on 2-D brain phantom images at various levels of noise and blur with a fixed background inhomogeneity. Agreement between tensor scale images computed on matching slices from two 3-D magnetic resonance datasets, acquired simultaneously using different protocols, is demonstrated. Finally, an application of tensor scale to anisotropic diffusive image filtering is presented that encourages smoothing within homogeneous regions and along edges and elongated structures while discouraging blurring across them.
Both qualitative and quantitative results of the new filtering method are presented and compared with the results obtained by spherical scale-based and standard diffusive filtering methods.
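As a sketch of the 2-D parametric representation described above: a symmetric positive-definite 2×2 tensor encodes an ellipse, and its eigen-decomposition separates the size, orientation, and anisotropy components. This is an illustrative decomposition in NumPy, not the authors' tensor scale algorithm; the function name and the anisotropy formula are our own:

```python
import numpy as np

def ellipse_params(T):
    """Recover ellipse size, orientation, and anisotropy from a
    symmetric positive-definite 2x2 tensor (illustration only)."""
    vals, vecs = np.linalg.eigh(T)       # eigenvalues in ascending order
    minor, major = np.sqrt(vals)         # semi-axis lengths of the ellipse
    v = vecs[:, 1]                       # eigenvector of the major axis
    theta = np.arctan2(v[1], v[0]) % np.pi  # axial orientation in [0, pi)
    anisotropy = 1.0 - minor / major     # 0 for a circle, near 1 if elongated
    return major, minor, theta, anisotropy

# A tensor whose ellipse is stretched 4x along the x axis:
T = np.array([[16.0, 0.0], [0.0, 1.0]])
major, minor, theta, aniso = ellipse_params(T)
```

The orientation is taken modulo π because an ellipse axis and its opposite direction coincide.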

2.
3.
Virtually all previous classifier models take vectors as inputs and operate directly on vector patterns, yet in many real applications it is natural to treat images as matrices. In this paper, we represent images as second-order tensors, i.e., matrices. We then propose two novel tensor algorithms, referred to as Maximum Margin Multisurface Proximal Support Tensor Machine (M3PSTM) and Maximum Margin Multi-weight Vector Projection Support Tensor Machine (M3VSTM), for classifying and segmenting images. M3PSTM and M3VSTM operate in tensor space and aim at computing two proximal tensor planes for multisurface learning. To avoid the singularity problem, the maximum margin criterion is used to formulate the optimization problems. The proposed tensor classifiers thus have an analytic form for the projection axes and can achieve maximum-margin representations for classification. With the tensor representation, the number of estimated parameters is significantly reduced, which makes M3PSTM and M3VSTM more computationally efficient on high-dimensional datasets than vector-representation-based methods. Thorough image classification and segmentation experiments on benchmark UCI and real datasets verify the efficiency and validity of our approaches. The visual and numerical results show that M3PSTM and M3VSTM deliver comparable or even better performance than some state-of-the-art classification algorithms.

4.
In this paper we propose a new method for extending 1-D step-edge detection filters to two dimensions via complex-valued filtering, which yields edge magnitude and direction simultaneously. Our method can be viewed either as an extension of the n-directional complex filtering of Paplinski to infinitely many directions or as a variant of Canny's gradient-based approach. In the second view, the real part of our filter computes the gradient in the x direction and the imaginary part computes the gradient in the y direction. Paplinski claimed that n-directional filtering improves on the gradient-based method, which computes the gradient in only two directions. We show that our omnidirectional extension and Canny's gradient-based extension of the 1-D DoG coincide. Contrary to Paplinski's claim, this coincidence shows that both approaches are confined to the subspace spanned by two 2-D filters, even though n-directional filtering hides these filters in a single complex-valued filter. Aside from these theoretical results, the omnidirectional method has practical advantages over both the n-directional and gradient-based approaches. Our experiments on synthetic and real-world images show the superiority of the omnidirectional and gradient-based methods over the n-directional approach. Compared with the gradient-based method, the advantage of the omnidirectional method lies mostly in freeing the user from specifying the smoothing window and its parameter. Since the omnidirectional and Canny's gradient-based extensions of the 1-D DoG coincide, we have based our experiments on extending the 1-D Demigny filter, which Demigny proposed as the optimal edge-detection filter for sampled images.
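The "second view" above (real part = x-gradient, imaginary part = y-gradient) can be sketched by packing a first-derivative-of-Gaussian pair into one complex kernel. This is a minimal NumPy illustration of the idea, not the authors' Demigny-based filter; the kernel size and σ are arbitrary choices:

```python
import numpy as np

def dog_complex_kernel(sigma=1.0, radius=3):
    """Complex 2-D edge filter: real part = x-derivative of a Gaussian,
    imaginary part = y-derivative (a Canny-style pair in one filter)."""
    x = np.arange(-radius, radius + 1)
    g = np.exp(-x**2 / (2 * sigma**2))
    dg = -x / sigma**2 * g               # 1-D derivative of Gaussian
    gx = np.outer(g, dg)                 # rows = y, columns = x
    gy = np.outer(dg, g)
    return gx + 1j * gy

def convolve2d(img, k):
    """'Same'-size 2-D convolution with zero padding (no SciPy needed)."""
    r = k.shape[0] // 2
    kf = k[::-1, ::-1]                   # flip: convolution, not correlation
    p = np.pad(img, r)
    out = np.zeros(img.shape, dtype=k.dtype)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = np.sum(p[i:i + 2*r + 1, j:j + 2*r + 1] * kf)
    return out

# Vertical step edge: dark left half, bright right half.
img = np.zeros((16, 16))
img[:, 8:] = 1.0
resp = convolve2d(img, dog_complex_kernel())
mag, ang = np.abs(resp), np.angle(resp)  # edge magnitude and direction
```

On the vertical edge the response is purely real and positive, so the recovered direction is 0 (gradient along x), as the complex formulation promises.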

5.
Tensor scale is a feature descriptor based on the geometric shape of image structures; because its feature-extraction process is computationally expensive, it is unsuitable for fast content-based image retrieval. This paper proposes a fast tensor-scale feature extraction algorithm based on the image foresting transform, and adopts the normalized tensor-scale orientation histogram as the geometric shape descriptor. Combined with a similarity metric, this yields a content-based image retrieval algorithm that is invariant to image translation, rotation, and scaling. Compared with existing tensor-scale computation methods, the algorithm has lower computational complexity, and simulation results demonstrate its effectiveness.

6.
To address the lack of whole-structure analysis and the excessive computational load of current image-denoising algorithms, a new algorithm is proposed that improves BM3D denoising with a wavelet-domain harmonic-filtering diffusion model. First, similar 2-D image blocks are grouped by the conventional Euclidean-distance criterion into 3-D arrays; after collaborative filtering, the 3-D arrays are inverse-transformed to obtain a pre-estimate of the image. Next, wavelet decomposition extracts the high-frequency part of the pre-estimated image for filtering; to avoid edge blurring, a new operator built from the Laplacian of Gaussian is substituted into the diffusion model. Finally, wavelet reconstruction yields the final approximation of the original image, balancing computation speed against denoising performance and preserving the image's structural information. Experimental results show that the new algorithm denoises well, preserves internal information more completely, and runs at a reasonable speed, making it suitable for practical applications.
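The Laplacian-of-Gaussian operator mentioned above can be sketched as a discrete kernel. This is the textbook LoG with a zero-sum correction, not the paper's modified operator; σ and the radius are arbitrary:

```python
import numpy as np

def log_kernel(sigma=1.4, radius=4):
    """Discrete Laplacian-of-Gaussian kernel, zero-sum corrected so that
    flat image regions give no response."""
    ax = np.arange(-radius, radius + 1)
    x, y = np.meshgrid(ax, ax)
    r2 = x**2 + y**2
    k = (r2 - 2 * sigma**2) / sigma**4 * np.exp(-r2 / (2 * sigma**2))
    return k - k.mean()                  # subtract mean: exact zero sum

k = log_kernel()   # 9x9 kernel: negative centre, positive surround
```

Convolving with such a kernel responds strongly at edges and blobs, which is why LoG-based operators are a natural building block for edge-aware diffusion.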

7.
Tensors provide a better representation of image space by avoiding the information loss of vectorization. Nonnegative tensor factorization (NTF), whose objective is to express an n-way tensor as a sum of k rank-1 tensors under nonnegativity constraints, has recently attracted much attention for its efficient and meaningful representations. However, NTF captures only the Euclidean structure of the data space and is not optimized for image representation, since image space is believed to be a sub-manifold embedded in a high-dimensional ambient space. To overcome this limitation of NTF, we propose a novel Laplacian-regularized nonnegative tensor factorization (LRNTF) method for image representation and clustering. In LRNTF, the image space is represented as a 3-way tensor and the manifold structure of the image space is explicitly considered in the factorization: two data points that are close to each other in the intrinsic geometry of the image space shall also be close to each other under the factorized basis. To evaluate the performance of LRNTF in image representation and clustering, we compare our algorithm with the NMF, NTF, NCut, and GNMF methods on three standard image databases. Experimental results demonstrate that LRNTF achieves better image clustering performance while being less sensitive to noise.

8.
Thinning algorithms based on quadtree and octree representations
Thinning is a critical pre-processing step for obtaining skeletons for pattern analysis. Quadtrees and octrees are hierarchical data representations used in image processing and computer graphics. In this paper, we present new 2-D area-based and 3-D surface-based thinning algorithms for converting quadtree and octree representations directly to skeletons. The computational complexity of our thinning algorithm for a 2-D or 3-D image of side length N is O(N²) or O(N³), respectively, which is more efficient than the existing O(N³) or O(N⁴) algorithms. Furthermore, our thinning algorithms lessen boundary noise spurs and are suited to parallel implementation.
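A minimal sketch of the quadtree representation the algorithms above operate on (the thinning itself is beyond a short example); function names are our own:

```python
import numpy as np

def build_quadtree(img):
    """Recursively split a square binary image into a quadtree.
    A node is a leaf (0 or 1) when its block is uniform; otherwise
    it is a tuple of four children (NW, NE, SW, SE)."""
    if img.min() == img.max():           # uniform block: stop splitting
        return int(img.flat[0])
    h = img.shape[0] // 2
    return (build_quadtree(img[:h, :h]), build_quadtree(img[:h, h:]),
            build_quadtree(img[h:, :h]), build_quadtree(img[h:, h:]))

def count_leaves(node):
    if isinstance(node, int):
        return 1
    return sum(count_leaves(c) for c in node)

img = np.zeros((8, 8), dtype=int)
img[:4, :4] = 1                          # one uniform foreground quadrant
tree = build_quadtree(img)
```

Here the whole image collapses into four leaves, which is exactly the compression that makes quadtree-based algorithms cheaper than pixel-by-pixel scans.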

9.
This article presents a novel method for mean filtering that reduces the required number of additions and eliminates the need for division altogether. The time reduction is achieved using basic store-and-fetch operations and is irrespective of the image or neighbourhood size. This method has been tested on a variety of greyscale images and neighbourhood sizes with promising results. These results indicate that the relative time requirement reduces with increase in image size. The method's efficiency also improves significantly with increase in neighbourhood size thereby making it increasingly useful when dealing with large images.

10.
Two-dimensional maximum-entropy image segmentation based on particle swarm optimization
This paper studies image segmentation based on two-dimensional maximum entropy. To address the heavy computation, long run time, and poor practicality of threshold selection in the 2-D maximum-entropy method, a 2-D maximum-entropy segmentation method based on particle swarm optimization (PSO) is proposed. PSO performs a global search of the image's 2-D threshold space, and the (pixel gray level, neighborhood mean gray level) pair that maximizes the 2-D entropy is taken as the segmentation threshold. Experimental results show that, because the method considers both the pixel gray level and the regional mean gray level and uses a discrete global search, it not only produces satisfactory segmentation results but also greatly improves computation speed, making it a practical and effective image segmentation method.
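For orientation, here is the 1-D (Kapur) maximum-entropy threshold found by exhaustive search; the paper's method extends the criterion to a 2-D (gray level, neighborhood mean) histogram and replaces the exhaustive search with particle swarm optimization, neither of which is shown here:

```python
import numpy as np

def max_entropy_threshold(gray):
    """1-D maximum-entropy (Kapur) thresholding by exhaustive search."""
    hist = np.bincount(gray.ravel(), minlength=256).astype(float)
    p = hist / hist.sum()
    best_t, best_h = 0, -np.inf
    for t in range(1, 256):
        w0, w1 = p[:t].sum(), p[t:].sum()
        if w0 == 0 or w1 == 0:           # one class empty: skip
            continue
        p0, p1 = p[:t] / w0, p[t:] / w1  # within-class distributions
        h = -sum(q * np.log(q) for q in p0 if q > 0) \
            - sum(q * np.log(q) for q in p1 if q > 0)
        if h > best_h:                   # keep threshold maximizing entropy
            best_t, best_h = t, h
    return best_t

# Synthetic two-level image: dark half at 50, bright half at 200.
gray = np.full((10, 10), 50)
gray[:, 5:] = 200
t = max_entropy_threshold(gray)
```

The exhaustive loop is what becomes expensive in 2-D: the threshold space grows to 256×256 candidates, which is why a global search heuristic such as PSO pays off.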

11.
Preserving the topological properties of objects during thinning procedures is an important issue in the field of image analysis. In the case of 2-D digital images (i.e. images defined on ℤ²) such procedures are usually based on the notion of simple point. In contrast to the situation in ℤⁿ, n≥3, it was proved in the 1980s that the exclusive use of simple points in ℤ² is sufficient to develop thinning procedures whose output is minimal with respect to the topological characteristics of the object. Based on the recently introduced notion of minimal simple set (generalising the notion of simple point), we establish new properties related to topology-preserving thinning in 2-D spaces which extend, in particular, this classical result to cubical complexes in 2-D pseudomanifolds.
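A 2-D simple-point test can be sketched with the classical Yokoi connectivity number for 8-connected objects; this is a standard formulation from the thinning literature, not taken from this paper:

```python
def is_simple_8(nbrs):
    """2-D simple-point test via the Yokoi connectivity number for
    8-connected objects. nbrs lists the 8 neighbours of p in circular
    order [E, NE, N, NW, W, SW, S, SE], with 1 = object, 0 = background.
    p is simple iff the connectivity number equals 1."""
    xb = [1 - v for v in nbrs]                     # complemented neighbourhood
    c = 0
    for k in (0, 2, 4, 6):                         # the four 4-neighbours
        c += xb[k] - xb[k] * xb[(k + 1) % 8] * xb[(k + 2) % 8]
    return c == 1
```

A point on a straight border is simple (deletable without topology change); an isolated point, an interior point, and a point bridging two diagonal components are not.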

12.
General Adaptive Neighborhood Choquet Image Filtering
A novel framework entitled General Adaptive Neighborhood Image Processing (GANIP) has recently been introduced to provide an original image representation and mathematical structure for adaptive image processing and analysis. The central idea is the key notion of adaptivity, which is simultaneously associated with the analyzing scales, the spatial structures, and the intensity values of the image to be addressed. In this paper, the GANIP framework is briefly presented and studied in the context of Choquet filtering (using fuzzy measures), which generalizes a large class of image filters. The resulting spatially adaptive operators are studied with respect to the general GANIP framework and illustrated in both biomedical and materials application areas. In addition, the proposed GAN-based filters are applied and compared to several other denoising methods in image restoration experiments, showing the high performance of the GAN-based Choquet filters.

13.
In this paper, a data-hiding method is proposed that combines a secret-sharing technique with a novel steganography method based on the integer wavelet transform. In the encoding phase, a secret image is first shared into n shares using a secret-sharing technique. The shares and the Fletcher-16 checksum of each share are then hidden into n cover images using the proposed wavelet-based steganography method. In the decoding phase, t out of the n stego images are required to recover the secret image: first, t shares and their checksums are extracted from the t stego images; then the secret image is reconstructed from the t shares by Lagrange interpolation. The proposed method is stable against serious attacks, including the RS and supervisory-training steganalysis methods, and it has the lowest detection rate under a global-feature-extraction classifier compared with state-of-the-art techniques. Experimental results on a set of benchmarks show that this method outperforms conventional methods by offering a highly secure and robust mechanism for joining secret image sharing and steganography.
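The Fletcher-16 checksum used above to verify extracted shares is a standard, compact construction (two running sums modulo 255); a sketch:

```python
def fletcher16(data):
    """Fletcher-16 checksum of a byte sequence: two running sums mod 255,
    packed as (sum2 << 8) | sum1."""
    s1 = s2 = 0
    for b in data:
        s1 = (s1 + b) % 255              # plain sum of bytes
        s2 = (s2 + s1) % 255             # sum of sums: position-sensitive
    return (s2 << 8) | s1
```

Because the second sum weights each byte by its position, Fletcher-16 catches reorderings that a simple additive checksum would miss, which matters when shares may be corrupted by steganalysis attacks.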

14.
In this paper, we propose a fast 3-D facial shape recovery algorithm from a single image under general, unknown lighting. To derive the algorithm, we formulate a nonlinear least-squares problem with two parameter vectors, related to personal identity and lighting conditions. We then combine spherical harmonics for the surface normals of a human face with tensor algebra and show that, under certain conditions, the dimensionality of the least-squares problem can be reduced to one-tenth of that of the regular subspace-based model by using tensor decomposition (N-mode SVD), which greatly speeds up the computation. To enhance shape-recovery performance, prior information is incorporated when updating the parameters. In experiments, the proposed algorithm takes less than 0.4 s to reconstruct a face and shows a significant performance improvement over other reported schemes.

15.
Image interpolation based on predicted gradients
陆志芳  钟宝江 《自动化学报》2018,44(6):1072-1085
A new nonlinear image interpolation algorithm is proposed, called image interpolation with predicted gradients (PGI). First, following the idea of the existing contrast-guided image interpolation (CGI) algorithm, edges in the low-resolution image are diffused; the properties of the unknown pixels in the high-resolution image are then predicted; finally, edge pixels are interpolated with one-dimensional directional interpolation and non-edge pixels with two-dimensional non-directional interpolation. Compared with typical nonlinear interpolation algorithms, the new algorithm makes fuller use of image edge information. Compared with CGI, the gradient-prediction strategy lets PGI determine the relevant properties of unknown pixels more effectively (whether a pixel is an edge pixel and, if so, its edge direction). Experimental results show that PGI outperforms existing interpolation algorithms in both visual quality and objective evaluation metrics. In addition, when interpolating color images, the usual RGB color space is converted to the Lab color space, which both reduces false-color artifacts and lowers the algorithm's time complexity.
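As a baseline for contrast, plain two-dimensional non-directional (bilinear) interpolation, which edge-directed schemes such as CGI and PGI refine at edge pixels, looks like this; an illustrative sketch, not the PGI algorithm:

```python
import numpy as np

def bilinear_upscale2x(img):
    """Plain 2x bilinear upscaling: every new pixel is an unweighted
    average of its known neighbours, regardless of edge direction."""
    h, w = img.shape
    out = np.zeros((2 * h - 1, 2 * w - 1), dtype=float)
    out[::2, ::2] = img                                   # known pixels
    out[::2, 1::2] = (img[:, :-1] + img[:, 1:]) / 2       # horizontal midpoints
    out[1::2, ::2] = (img[:-1, :] + img[1:, :]) / 2       # vertical midpoints
    out[1::2, 1::2] = (img[:-1, :-1] + img[:-1, 1:]
                       + img[1:, :-1] + img[1:, 1:]) / 4  # cell centres
    return out

img = np.array([[0.0, 2.0], [4.0, 6.0]])
out = bilinear_upscale2x(img)
```

Averaging across an edge is exactly what blurs it; edge-directed methods instead interpolate along the predicted edge direction at those pixels.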

16.
We present a new fast spatial averaging technique that efficiently implements operations for spatial averaging or two-dimensional mean filtering. To perform spatial averaging of an M×N image with an averaging filter of size m×n, our proposed method requires approximately 4MN additions and no division. This is very promising, since the major computations required by our algorithm depend only on the size of the original image but not on the size of the averaging filter. To our knowledge, this technique requires the smallest number of additions for mean filtering. Experimental results on various image sizes using different filter sizes confirm that our fast spatial averaging algorithm is significantly faster than other spatial averaging algorithms, especially when the size of the input image is very large.
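The independence from filter size claimed above is also delivered by the classic summed-area-table identity: each window sum costs four fetches and three additions/subtractions regardless of m×n. This sketch is that textbook construction (which still divides once per output pixel), not the paper's division-free algorithm:

```python
import numpy as np

def mean_filter(img, m, n):
    """Mean filtering of all 'valid' m x n windows via a summed-area
    table s, where s[i, j] = sum of img[:i, :j]. Each window sum is
    s[i+m, j+n] - s[i, j+n] - s[i+m, j] + s[i, j]."""
    s = np.zeros((img.shape[0] + 1, img.shape[1] + 1))
    s[1:, 1:] = np.cumsum(np.cumsum(img, axis=0), axis=1)
    win = s[m:, n:] - s[:-m, n:] - s[m:, :-n] + s[:-m, :-n]
    return win / (m * n)

img = np.arange(16, dtype=float).reshape(4, 4)
out = mean_filter(img, 3, 3)             # 2x2 grid of valid window means
```

Building the table costs about 2MN additions, after which the per-window cost is constant, which is the same scaling behavior the abstract describes.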

17.
This paper presents a method for sharing and hiding secret images, modified from the (t,n) threshold scheme (Comput. Graph. 26(5) (2002) 765). The given secret image is shared and n shadow images are generated. Each shadow image is hidden in an ordinary image so as not to attract an attacker's attention. Any t of the n hidden shadows can be used to recover the secret image. The size of each stego image (in which a shadow image is hidden) is about 1/t of that of the secret image, avoiding the need for much storage space and transmission time (in the sense that the total size of t stego images is about the size of the secret image). Experimental results indicate that the quality of both the recovered secret image and the stego images that contain the hidden shadows is acceptable. Photographers who work in hostile areas can use this system to transmit photographs.
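A (t,n) threshold scheme of the kind referenced (Thien–Lin style: t secret values become the coefficients of a degree-(t−1) polynomial over a prime field, and each share is one evaluation) can be sketched as follows. This is an illustrative reconstruction, not the paper's full method; the 1/t size reduction comes precisely from packing t secret pixels into each share point:

```python
P = 251   # a prime; Thien-Lin style schemes work over GF(251)

def make_shares(block, n):
    """(t, n) sharing: the t secret values in `block` are the
    coefficients of a degree-(t-1) polynomial f; share i is (i, f(i))."""
    return [(x, sum(c * pow(x, k, P) for k, c in enumerate(block)) % P)
            for x in range(1, n + 1)]

def recover(shares, t):
    """Rebuild all polynomial coefficients from any t shares by
    Lagrange interpolation over GF(P)."""
    pts = shares[:t]
    coeffs = [0] * t
    for i, (xi, yi) in enumerate(pts):
        basis, denom = [1], 1            # i-th Lagrange basis polynomial
        for j, (xj, _) in enumerate(pts):
            if i == j:
                continue
            # multiply basis polynomial by (x - xj)
            basis = [(b - xj * a) % P
                     for a, b in zip(basis + [0], [0] + basis)]
            denom = denom * (xi - xj) % P
        inv = pow(denom, P - 2, P)       # modular inverse via Fermat
        coeffs = [(c + yi * inv * b) % P for c, b in zip(coeffs, basis)]
    return coeffs

shares = make_shares([123, 45], n=5)     # t = 2: any two shares suffice
```

Any t of the n share points reconstruct the same coefficient list, so fewer than t shares reveal nothing about the individual pixel values.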

18.
Image interpolation is an important branch of image processing and is widely used in imaging; for example, it is often used in 3-D medical imaging to compensate for insufficient information during image reconstruction by simulating additional slices between two-dimensional images. Reversible data hiding has become a significant branch of the information-hiding field: reversibility allows the original media to be completely restored, without any degradation, after the embedded messages have been extracted. This study proposes a high-capacity image-hiding scheme that exploits an interpolating method called Interpolation by Neighboring Pixels (INP) on maximum difference values to improve the performance of the data-hiding scheme proposed by Jung and Yoo. The proposed scheme offers high embedding capacity with low computational complexity and good image quality. Experimental results show good performance for payloads up to 2.28 bpp. Moreover, INP yields higher PSNRs than other interpolating methods such as NMI, NNI and BI.

19.
Artificial Color filters are designed to attenuate some pixels and pass others. The pass/attenuate decision is made on the basis of the learned association of spectral components with user-defined concepts. Earlier work showed various ways to design Artificial Color filters using multiple user-designated classes, and showed that those filters are amenable to useful manipulations such as image processing and Boolean aggregation. Artificial Color filtering has so far been binary, so Boolean logic was the only choice for aggregating filters. This paper shows how to fuzzify Artificial Color filters. Fuzzy logic subsumes Boolean logic and can do so in many ways; several different fuzzy T-norms are applied to Artificial Color filters to illustrate the richness of aggregation. Margin Setting, a supervised statistical pattern-recognition method used to train the filters, is very conservative in what is definitely assigned to a class (μ=1) while allowing a useful gradation of membership (μ<1) for other cases. A parametric exploration of these effects for an image is presented.
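The three most common T-norms make the "fuzzy logic subsumes Boolean logic" point concrete: each reduces to Boolean AND on {0,1} memberships while grading intermediate memberships differently. An illustrative sketch (the abstract does not say which T-norms the paper uses):

```python
def t_min(a, b):
    """Goedel (minimum) t-norm."""
    return min(a, b)

def t_product(a, b):
    """Product t-norm."""
    return a * b

def t_lukasiewicz(a, b):
    """Lukasiewicz t-norm."""
    return max(0.0, a + b - 1.0)
```

On crisp memberships all three agree with Boolean AND, so any of them can aggregate the existing binary filters unchanged; they diverge only on the graded memberships (μ<1) that fuzzification introduces, ordered Lukasiewicz ≤ product ≤ minimum.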

20.
Consider the black-box interpolation of a τ-sparse, n-variate rational function f, where τ is the maximum number of terms in either numerator or denominator. When numerator and denominator are of degree at most d, the number of possible terms in f is O(dⁿ) and explodes exponentially as the number of variables increases. The complexity of our sparse rational interpolation algorithm no longer depends exponentially on n. It still depends on d because we densely interpolate univariate auxiliary rational functions of the same degree. We remove the exponent n and introduce the sparsity τ into the complexity by reconstructing the auxiliary functions' coefficients via sparse multivariate interpolation. The approach is new and builds on a normalization of the rational function's representation. Our method can be combined with probabilistic and deterministic components from sparse polynomial black-box interpolation to suit either an exact or a finite-precision computational environment. The latter is illustrated with several examples, ranging from exact finite-field arithmetic to noisy floating-point evaluations. In general, the performance of our sparse rational black-box interpolation depends on the choice of the underlying sparse polynomial black-box interpolation. If the early-termination Ben-Or/Tiwari algorithm is used, our method achieves rational interpolation in O(τd) black-box evaluations and is thus sensitive to the sparsity of the multivariate f.
