Similar Documents
20 similar documents found.
1.
Adaptive smoothing via contextual and local discontinuities   (total citations: 4; self-citations: 0; citations by others: 4)
A novel adaptive smoothing approach is proposed for noise removal and feature preservation, in which two distinct measures are adopted simultaneously to detect discontinuities in an image. The inhomogeneity underlying an image is employed as a multiscale measure that detects contextual discontinuities for feature preservation and for control of the smoothing speed, while the local spatial gradient detects variable local discontinuities during smoothing. Unlike previous adaptive smoothing approaches, the two discontinuity measures are combined in our algorithm so that they act synergistically in preserving nontrivial features, which leads to a constrained anisotropic diffusion process in which inhomogeneity provides intrinsic constraints for selective smoothing. Thanks to these intrinsic constraints, our smoothing scheme is insensitive to the termination time, and the resulting images over a wide range of iterations yield nearly identical results for various early vision tasks. Our algorithm is formally analyzed and related to anisotropic diffusion. Comparative results indicate that our algorithm yields favorable smoothing results, and its application to the extraction of hydrographic objects demonstrates its usefulness as a tool for early vision.
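A minimal NumPy sketch of the dual-measure idea follows; here the contextual inhomogeneity is approximated by local variance over a larger window and the weighting functions are illustrative assumptions, not the authors' exact formulation.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def adaptive_smooth(img, iters=20, k_local=10.0, k_ctx=0.02, ctx_win=9):
    """Edge-preserving smoothing driven by two discontinuity measures:
    a local gradient (recomputed each iteration) and a fixed contextual
    inhomogeneity map (local variance over a larger window)."""
    img = img.astype(float)
    # Contextual discontinuity: coarse-scale inhomogeneity, kept fixed.
    mean = uniform_filter(img, ctx_win)
    var = np.clip(uniform_filter(img**2, ctx_win) - mean**2, 0.0, None)
    ctx_weight = np.exp(-var / (k_ctx * var.max() + 1e-12))

    for _ in range(iters):
        gy, gx = np.gradient(img)
        grad2 = gx**2 + gy**2
        w = np.exp(-grad2 / (k_local**2)) * ctx_weight  # combined conductance
        # Weighted 4-neighbour averaging (explicit diffusion-like step).
        num = np.zeros_like(img)
        den = np.zeros_like(img)
        for shift in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            w_s = np.roll(w, shift, axis=(0, 1))
            i_s = np.roll(img, shift, axis=(0, 1))
            num += w_s * i_s
            den += w_s
        img = num / (den + 1e-12)
    return img
```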

2.
Density estimation employed in multi-pass global illumination algorithms gives rise to a trade-off between bias and noise. The problem is most evident as blurring of strong illumination features; in particular, this blurring erodes the fine structures and sharp lines prominent in caustics. To address this problem, we introduce a photon mapping algorithm based on nonlinear anisotropic diffusion. Our algorithm adapts to the structure of the photon map so that smoothing occurs along edges and structures rather than across them. In this way, we preserve important illumination features while eliminating noise. We demonstrate the applicability of our algorithm through a series of tests in which we evaluate its visual and computational performance against existing popular algorithms.

3.
Advanced Robotics, 2013, 27(6): 603-624
This paper studies the motion control of a multiple-manipulator free-flying space robot chasing a passive object in close proximity. The free-flyer kinematics are developed using a minimum set of body-fixed barycentric vectors. Using a general and a quasi-coordinate Lagrangian formulation, equations of motion for model-based controllers are derived. Two model-based and one transposed Jacobian control algorithms are developed that allow coordinated tracking control of the manipulators and the spacecraft. In particular, an Euler parameter model-based control algorithm is presented that overcomes the non-physical singularities caused by Euler angle representations of attitude. To ensure smooth operation and to reduce disturbances on the spacecraft and on the object just before grasping, appropriate trajectories for the motion of the spacecraft and manipulators are planned. The performance of the model-based algorithms is compared, by simulation, to that of the transposed Jacobian algorithm. The results show that, owing to the complexity of space robotic systems, the performance of model-based algorithms deteriorates drastically in the presence of model uncertainties. In such cases, a simple transposed Jacobian algorithm yields comparable results with a much smaller computational burden, an issue that is very important in space.
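To make the transposed Jacobian idea concrete, here is a minimal sketch for a fixed-base planar 2-link arm; the link lengths, gains, and the arm model itself are illustrative assumptions, not the paper's free-flyer dynamics.

```python
import numpy as np

def jacobian_2link(q, l1=1.0, l2=0.8):
    """Jacobian of a planar 2-link arm's end-effector position."""
    s1, c1 = np.sin(q[0]), np.cos(q[0])
    s12, c12 = np.sin(q[0] + q[1]), np.cos(q[0] + q[1])
    return np.array([[-l1 * s1 - l2 * s12, -l2 * s12],
                     [ l1 * c1 + l2 * c12,  l2 * c12]])

def forward_kin(q, l1=1.0, l2=0.8):
    return np.array([l1 * np.cos(q[0]) + l2 * np.cos(q[0] + q[1]),
                     l1 * np.sin(q[0]) + l2 * np.sin(q[0] + q[1])])

def transposed_jacobian_torque(q, dq, x_des, Kp=20.0, Kd=4.0):
    """tau = J(q)^T (Kp * e - Kd * xdot): no inverse kinematics or dynamic
    model is required, which keeps the computational burden low."""
    J = jacobian_2link(q)
    e = x_des - forward_kin(q)   # Cartesian position error
    xdot = J @ dq                # Cartesian velocity
    return J.T @ (Kp * e - Kd * xdot)
```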

4.
Protein function prediction is an important problem in functional genomics. Typically, protein sequences are represented by feature vectors. A major problem with protein datasets, and one that increases the complexity of classification models, is their large number of features. Feature selection (FS) techniques are used to deal with this high-dimensional feature space. In this paper, we propose a novel feature selection algorithm that combines genetic algorithms (GA) and ant colony optimization (ACO) for faster and better search capability. The hybrid algorithm exploits the advantages of both ACO and GA. The proposed algorithm is easy to implement and, because it uses a simple classifier, its computational complexity is very low. Its performance is compared to that of two prominent population-based algorithms, ACO and genetic algorithms. Experiments are carried out on two challenging biological datasets involving the hierarchical functional classification of GPCRs and enzymes. The comparison criteria are maximizing predictive accuracy and finding the smallest subset of features. The experimental results indicate the superiority of the proposed algorithm.
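A compact sketch of the general wrapper idea (a population-based search scored by a simple classifier) is shown below; note that this is a plain genetic algorithm with leave-one-out 1-NN fitness, not the paper's GA-ACO hybrid, and all parameters are illustrative.

```python
import numpy as np

def knn1_accuracy(X, y, mask):
    """Leave-one-out 1-NN accuracy on the selected feature subset."""
    Xs = X[:, mask.astype(bool)]
    if Xs.shape[1] == 0:
        return 0.0
    d = np.linalg.norm(Xs[:, None, :] - Xs[None, :, :], axis=2)
    np.fill_diagonal(d, np.inf)
    return np.mean(y[d.argmin(axis=1)] == y)

def ga_feature_select(X, y, pop=30, gens=40, p_mut=0.05, seed=0):
    rng = np.random.default_rng(seed)
    n_feat = X.shape[1]
    P = rng.integers(0, 2, size=(pop, n_feat))
    for _ in range(gens):
        # Fitness rewards accuracy and mildly penalises large subsets.
        fit = np.array([knn1_accuracy(X, y, m) - 0.01 * m.sum() / n_feat for m in P])
        P = P[np.argsort(fit)[::-1]]
        elite = P[: pop // 2]
        # One-point crossover among the elite plus bit-flip mutation.
        children = []
        while len(children) < pop - len(elite):
            a, b = elite[rng.integers(len(elite), size=2)]
            cut = rng.integers(1, n_feat)
            child = np.concatenate([a[:cut], b[cut:]])
            child ^= (rng.random(n_feat) < p_mut).astype(child.dtype)
            children.append(child)
        P = np.vstack([elite, children])
    return P[0].astype(bool)   # mask of selected features
```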

5.
An improved anisotropic diffusion smoothing filter   (total citations: 11; self-citations: 2; citations by others: 11)
Noise filtering and enhancement are essential components of digital image processing. To remove noise effectively while preserving edges and important details, this paper builds on the Perona-Malik anisotropic diffusion model (PM model) and, by improving the diffusion coefficient in the diffusion equation obtained from the variational formulation, proposes a denoising diffusion model that is more effective and more adaptive for noisy images. The model applies different diffusion coefficients for different gradient magnitudes. In practice, it not only preserves image edges effectively but also overcomes the PM model's sensitivity to small-scale noise and its distortion of some edges and details. Experimental results show that the improved diffusion model outperforms the PM model and is a well-behaved edge-preserving smoothing model.
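A minimal sketch of Perona-Malik-style diffusion with a gradient-dependent conductance is given below; the piecewise choice of coefficient merely illustrates the "different coefficients for different gradient ranges" idea and is not the exact coefficient proposed in the paper.

```python
import numpy as np

def pm_diffusion(img, iters=50, dt=0.2, k=15.0, thresh=5.0):
    """Perona-Malik diffusion with a gradient-dependent conductance.
    Below `thresh` (likely small-scale noise) diffusion is kept at full
    strength; above it the classic exponential conductance preserves edges."""
    u = img.astype(float)
    for _ in range(iters):
        # Differences toward the four neighbours.
        dN = np.roll(u, -1, axis=0) - u
        dS = np.roll(u,  1, axis=0) - u
        dE = np.roll(u, -1, axis=1) - u
        dW = np.roll(u,  1, axis=1) - u

        def c(d):
            g = np.abs(d)
            edge_preserving = np.exp(-(g / k) ** 2)
            return np.where(g < thresh, 1.0, edge_preserving)

        u = u + dt * (c(dN) * dN + c(dS) * dS + c(dE) * dE + c(dW) * dW)
    return u
```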

6.
An image smoothing method based on anisotropic diffusion equations in wavelet subbands   (total citations: 1; self-citations: 1; citations by others: 0)
Since the wavelet transform provides a multiresolution representation of an image while assigning different edges to different wavelet subbands, this paper proposes an image smoothing method that applies an improved degenerate diffusion equation to the individual wavelet subbands. For the high-frequency subbands, the diffusion direction is determined by the subband orientation, and the diffusion speed is determined by the gradient magnitude and the second derivative; for the low-frequency subband, where intensity varies slowly, a heat equation with a constant conduction coefficient is used for smoothing. Experiments show that the method smooths the image while preserving line edges and corner features well, and at the same time greatly reduces algorithmic complexity.
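A rough sketch of per-subband processing with PyWavelets follows: the approximation band gets plain (heat-equation-like) smoothing, while the high-frequency subbands are attenuated where their local response is weak. The soft-threshold rule stands in for the paper's subband-oriented diffusion and is an assumption.

```python
import numpy as np
import pywt
from scipy.ndimage import gaussian_filter

def wavelet_subband_smooth(img, wavelet='db2', sigma=1.0):
    """One-level version: Gaussian-smooth the low-frequency band
    (constant-conductance diffusion) and soft-threshold the detail
    subbands so weak (noisy) coefficients vanish while strong responses,
    i.e. edges aligned with each subband, survive."""
    cA, (cH, cV, cD) = pywt.dwt2(img.astype(float), wavelet)
    cA = gaussian_filter(cA, sigma)

    def shrink(c):
        t = 1.5 * np.median(np.abs(c))              # illustrative threshold
        return np.sign(c) * np.maximum(np.abs(c) - t, 0.0)

    return pywt.idwt2((cA, (shrink(cH), shrink(cV), shrink(cD))), wavelet)
```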

7.
The increasing demand for higher-resolution images and higher frame-rate videos will always challenge the available computational power when real-time performance is required to solve the stereo-matching problem in 3D reconstruction applications. The use of asymptotic analysis is therefore necessary to measure the time and space performance of stereo-matching algorithms regardless of the size of the input and of the computational power available. In this paper, we survey several classic stereo-matching algorithms with regard to time–space complexity. We also report running-time experiments for several algorithms, and the results are consistent with our complexity analysis. We present a new dense stereo-matching algorithm based on a greedy heuristic path computation in disparity space, together with a procedure that improves disparity maps in depth-discontinuity regions and that works as a post-processing step for any technique solving the dense stereo-matching problem. We prove that our algorithm and the post-processing procedure have optimal O(n) time–space complexity, where n is the size of a stereo image. Our algorithm performs only a constant number of computations per pixel because it avoids a brute-force search over the disparity range. Hence, it is faster than "real-time" techniques while producing comparable results when evaluated on ground-truth benchmarks. The correctness of our algorithm is demonstrated with experiments on real and synthetic data.
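For contrast, here is the kind of brute-force block-matching baseline that the O(n) algorithm above avoids: it scans the full disparity range per pixel, so its cost is O(n * max_disp). Window size, disparity range, and the SAD cost are illustrative choices, not the paper's method.

```python
import numpy as np
from scipy.ndimage import convolve

def block_match_disparity(left, right, max_disp=32, win=5):
    """Winner-take-all SAD block matching on a rectified grayscale pair."""
    h, w = left.shape
    half = win // 2
    L = np.pad(left.astype(float), half, mode='edge')
    R = np.pad(right.astype(float), half, mode='edge')
    best_cost = np.full((h, w), np.inf)
    disp = np.zeros((h, w), dtype=int)
    ones = np.ones((win, win))
    for d in range(max_disp):
        shifted = np.roll(R, d, axis=1)          # candidate disparity shift
        sad = convolve(np.abs(L - shifted), ones, mode='nearest')[half:-half, half:-half]
        better = sad < best_cost                 # keep the lowest-cost disparity
        best_cost[better] = sad[better]
        disp[better] = d
    return disp
```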

8.
Low-dose computed tomography (LDCT) reduces the X-ray dose delivered to the patient, but the reduced dose introduces severe artifacts and noise into the reconstructed images, which strongly interferes with clinical diagnosis. To address this problem, an improved maximum a posteriori (MAP) projection-domain denoising algorithm with an anisotropic weighted prior model is proposed. Observing that intuitionistic fuzzy entropy can effectively distinguish smooth regions from edge and detail regions, the algorithm combines it with the conventional anisotropic diffusion coefficient to construct a new diffusion coefficient whose adaptivity is controlled by the local variance. This coefficient is then incorporated into a MAP estimation framework with a Huber prior, so that different regions of the projection data are denoised with different strengths. The algorithm was validated on three phantoms, a digital pelvis phantom, the Shepp-Logan head phantom, and a digital thorax phantom, and compared with filtered back projection (FBP), penalized reweighted least-squares (PRWLS), and an anisotropic weighted prior sinogram smoothing algorithm. Experimental results show that images reconstructed by the proposed algorithm contain markedly fewer artifacts while preserving edges and details well. The signal-to-noise ratios for the three phantoms were 20.5020 dB, 23.2948 dB, and 21.0184 dB, with running times of 49.50 s, 49.60 s, and 8.59 s, respectively.

9.
This paper proposes a compound anisotropic diffusion filtering algorithm that introduces the edge-sensitive instantaneous coefficient of variation (ICOV) operator from the speckle reducing anisotropic diffusion (SRAD) model into the nonlinear coherent diffusion (NCD) model. A statistically motivated correlation-coefficient matrix of the ICOV operator is proposed to measure correlation within the image: each entry is the correlation of an ICOV value with those in its row and column. This correlation reaches a maximum near edges and therefore serves as a good measure for edge detection. The diffusion strength near edges is then adjusted according to the correlation of each pixel with its neighbors, yielding more effective and more accurate nonlinear denoising and edge enhancement. Experimental results show that, compared with other anisotropic diffusion algorithms, the proposed algorithm achieves better performance metrics, better denoising, and better edge preservation.

10.
We present a segmentation method for natural images that uses an anisotropic diffusion algorithm and a region growing algorithm. We propose a modified anisotropic diffusion algorithm, guided by boundary edges, as a precise edge-preserving smoothing technique. A linking algorithm for boundary edges based on a directional potential function is incorporated into the anisotropic diffusion algorithm to improve its edge-preserving smoothing ability. As a result, unnecessary image details are effectively smoothed away before the region growing algorithm is applied, making the proposed method well suited to accurate segmentation of natural images. Several simulated examples demonstrate the effectiveness of the proposed technique.
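A small sketch of the second stage, seeded region growing on the smoothed image, is shown below; the fixed intensity tolerance and 4-connectivity are simplifying assumptions rather than the authors' exact growing criterion.

```python
import numpy as np
from collections import deque

def region_grow(img, seed, tol=10.0):
    """Grow a region from `seed` (row, col), adding 4-connected pixels
    whose intensity stays within `tol` of the running region mean."""
    h, w = img.shape
    mask = np.zeros((h, w), dtype=bool)
    mask[seed] = True
    total, count = float(img[seed]), 1
    q = deque([seed])
    while q:
        r, c = q.popleft()
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < h and 0 <= nc < w and not mask[nr, nc]:
                if abs(img[nr, nc] - total / count) <= tol:
                    mask[nr, nc] = True
                    total += float(img[nr, nc])
                    count += 1
                    q.append((nr, nc))
    return mask
```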

11.
Dimensionality reduction is a key technique for handling high-dimensional data, and linear discriminant analysis (LDA) and its variants are effective supervised methods. However, most discriminant analysis algorithms have the following drawbacks: (a) they cannot select the most discriminative features; (b) they ignore the interference of noise and redundant features in the original space; and (c) updating the adjacency graph is computationally expensive. To overcome these drawbacks, a fast adaptive local ratio-sum discriminant analysis algorithm based on subspace learning is proposed. First, a model unifying the ratio-sum criterion and subspace learning is formulated to explore the latent structure of the data in the subspace and to select more discriminative features while avoiding the influence of noise in the original space. Second, an anchor-based strategy is used to construct the adjacency graph characterizing the local structure of the data, which accelerates graph learning. Shannon entropy regularization is then introduced to avoid trivial solutions. Finally, comparative experiments on several datasets verify the effectiveness of the algorithm.

12.
Adaptive smoothing: a general tool for early vision   (total citations: 18; self-citations: 0; citations by others: 18)
A method is presented for smoothing a signal while preserving its discontinuities. This is achieved by repeatedly convolving the signal with a very small averaging mask weighted by a measure of the signal continuity at each point. Edge detection can be performed after a few iterations, and features extracted from the smoothed signal are correctly localized (hence, no tracking is needed). This last property allows the derivation of a scale-space representation of a signal using the adaptive smoothing parameter k as the scale dimension. The relation of this process to anisotropic diffusion is shown, and a scheme to preserve higher-order discontinuities, with results on range images, is proposed. Different implementations of adaptive smoothing are presented: first on a serial machine, for which a multigrid algorithm is proposed to speed up the smoothing effect, and then on a single-instruction multiple-data (SIMD) parallel machine such as the Connection Machine. Various applications of adaptive smoothing, such as edge detection, range image feature extraction, corner detection, and stereo matching, are discussed.
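A compact sketch of the core iteration described above: a 3x3 averaging mask whose weights come from a continuity measure exp(-|grad I|^2 / (2 k^2)), repeated for a few iterations. The wrap-around border handling is an implementation convenience, not part of the method.

```python
import numpy as np

def adaptive_smoothing(signal_2d, k=8.0, iters=10):
    """Repeated weighted averaging with a 3x3 mask; the weights are the
    continuity measure exp(-|grad I|^2 / (2 k^2)), so averaging is
    suppressed across discontinuities and they remain well localized."""
    I = signal_2d.astype(float)
    for _ in range(iters):
        gy, gx = np.gradient(I)
        w = np.exp(-(gx**2 + gy**2) / (2.0 * k * k))   # k plays the role of scale
        num = np.zeros_like(I)
        den = np.zeros_like(I)
        for dr in (-1, 0, 1):
            for dc in (-1, 0, 1):
                w_s = np.roll(w, (dr, dc), axis=(0, 1))
                I_s = np.roll(I, (dr, dc), axis=(0, 1))
                num += w_s * I_s
                den += w_s
        I = num / den
    return I
```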

13.
RELIEF is considered one of the most successful algorithms for assessing the quality of features. In this paper, we propose a set of new feature weighting algorithms that perform significantly better than RELIEF without a large increase in computational complexity. Our work starts from a mathematical interpretation of the seemingly heuristic RELIEF algorithm as an online method solving a convex optimization problem with a margin-based objective function. This interpretation explains the success of RELIEF in real applications and enables us to identify and address two of its weaknesses: RELIEF implicitly assumes that the nearest neighbors found in the original feature space are also the nearest neighbors in the weighted space, and it lacks a mechanism to deal with outlier data. We propose an iterative RELIEF (I-RELIEF) algorithm that alleviates these deficiencies by exploiting the framework of the expectation-maximization algorithm. We extend I-RELIEF to multiclass settings using a new multiclass margin definition. To reduce computational costs, an online learning algorithm is also developed. A convergence analysis of the proposed algorithms is presented. The results of large-scale experiments on the UCI and microarray datasets demonstrate the effectiveness of the proposed algorithms and verify the theoretical results.
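For reference, a bare-bones version of the textbook two-class RELIEF weight update that the paper reinterprets: nearest hit and miss are found in the original feature space, which is precisely the assumption (together with the lack of outlier handling) that I-RELIEF relaxes. The L1 distance and sampling scheme are illustrative choices.

```python
import numpy as np

def relief_weights(X, y, n_iter=None, seed=0):
    """Classic two-class RELIEF: for each sampled instance, find its nearest
    hit (same class) and nearest miss (other class) in the ORIGINAL feature
    space, then update each feature weight by |x - miss| - |x - hit|."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    n_iter = n_iter or n
    w = np.zeros(d)
    for _ in range(n_iter):
        i = rng.integers(n)
        dist = np.abs(X - X[i]).sum(axis=1)     # L1 distances to all points
        dist[i] = np.inf                        # exclude the instance itself
        same, other = (y == y[i]), (y != y[i])
        hit = np.where(same)[0][np.argmin(dist[same])]
        miss = np.where(other)[0][np.argmin(dist[other])]
        w += np.abs(X[i] - X[miss]) - np.abs(X[i] - X[hit])
    return w / n_iter
```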

14.
To achieve efficient energy utilization and data transmission in wireless EH-MIMO cooperative systems, an exhaustive optimal algorithm is proposed that selects, among a node's multiple antennas, a subset for energy harvesting and a subset for data transmission. The selection is the exhaustive solution to the channel-capacity maximization problem of the proposed EH-MIMO cooperative model and is therefore optimal in terms of capacity. To reduce complexity, two suboptimal antenna selection algorithms, incremental selection and decremental selection, are further proposed and compared with the optimal algorithm. Simulation results show that the suboptimal algorithms approach the optimal algorithm in system channel capacity and energy efficiency while having much lower complexity.
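A toy sketch of the exhaustive split of a node's antennas into a data subset and an energy-harvesting subset, scoring each split by the MIMO capacity log2 det(I + (SNR/Nt) H H^H) of the data antennas; the channel and power model are illustrative assumptions, not the paper's system model.

```python
import numpy as np
from itertools import combinations

def exhaustive_antenna_split(H, n_data, snr=10.0):
    """H: (n_rx, n_tx) channel matrix of the node's antennas.
    Try every choice of `n_data` antennas for data transmission (the rest
    harvest energy) and keep the split with the largest capacity."""
    n_tx = H.shape[1]
    best_cap, best_set = -np.inf, None
    for data_set in combinations(range(n_tx), n_data):
        Hs = H[:, data_set]                               # data sub-channel
        G = np.eye(Hs.shape[0]) + (snr / n_data) * Hs @ Hs.conj().T
        cap = np.log2(np.linalg.det(G).real)              # capacity of this split
        if cap > best_cap:
            best_cap, best_set = cap, data_set
    harvest_set = tuple(a for a in range(n_tx) if a not in best_set)
    return best_set, harvest_set, best_cap
```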

15.
Reliable object recognition is an essential part of most visual systems. Model-based approaches to object recognition use a database (a library) of modeled objects; for a given set of sensed data, the problem of model-based recognition is to identify and locate the objects from the library that are present in the data. We show that the complexity of model-based recognition depends very heavily on the number of object models in the library even if each object is modeled by a small number of discrete features. Specifically, deciding whether a discrete set of sensed data can be interpreted as transformed object models from a given library is NP-complete if the transformation is any combination of translation, rotation, scaling, and perspective projection. This suggests that efficient algorithms for model-based recognition must use additional structure to avoid the inherent computational difficulties. © 1998 John Wiley & Sons, Inc.

16.
Unsupervised data clustering can be addressed by the estimation of mixture models, where the mixture components are associated with clusters in data space. In this paper we present a novel unsupervised classification algorithm based on the simultaneous estimation of the mixture's parameters and the number of components (complexity). Its distinguishing aspect is the way the data space is searched: the algorithm starts from a single component covering the whole input space and iteratively splits components according to a breadth-first search on a binary tree structure, which provides an efficient exploration of the possible solutions. The proposed scheme achieves important computational savings with respect to other state-of-the-art algorithms, making it particularly suited to scenarios where running time matters, such as computer and robot vision applications. The initialization procedure is unique, allowing a deterministic evolution of the algorithm, while parameter estimation is performed with a modified Expectation-Maximization algorithm. To compare models of different complexity we use the Minimum Message Length criterion, which trades off the number of components against the data log-likelihood. We validate the new approach with experiments on synthetic data, and we test and compare its computational efficiency with that of related approaches in data-intensive image segmentation applications.

17.
Since conventional feature selection algorithms cannot capture relationships among features, this paper proposes a nonlinear feature selection method. By introducing a kernel function, the original data are projected into a high-dimensional kernel space, and because the computation is carried out in that space, relationships among features can be taken into account. Thanks to the properties of kernel functions, even when the data are projected into an infinite-dimensional space by a Gaussian kernel, the computational complexity remains small. For the regularization term, two norms are used as a double constraint, which not only improves accuracy but also reduces the variance of the experimental results to 0.74, far smaller than that of comparable algorithms, making the algorithm more stable. The proposed algorithm is compared with six algorithms of the same kind on eight commonly used datasets, with an SVM classifier used to measure classification accuracy; it achieves improvements of at least 1.84%, at most 3.27%, and 2.75% on average.

18.
Existing user selection algorithms for block diagonalization (BD) precoding systems rarely exploit the relationship between already-selected users and the remaining users to exclude ineligible users. To address this shortcoming, a low-complexity user selection algorithm based on codebook clustering (the CodeGreedy algorithm) is presented. The algorithm uses the chordal distance to characterize the correlation of user channels and, on this basis, assigns users to different codebook spaces; users clustered in the same codebook space have strongly correlated channels and form mutually exclusive ...

19.
Recently, many smoothing techniques have been presented for maneuvering target tracking. The interacting multiple model-probabilistic data association (IMM-PDA) fixed-lag smoothing algorithm provides an efficient way to track a maneuvering target in a cluttered environment; however, in traditional algorithms the smoothing lag of each model in the model set is a fixed constant. A new approach is developed in this paper. Although the method is still based on applying IMM-PDA to a state-augmented system, it adopts a different smoothing lag for each model according to the model's complexity. As a result, the application is more flexible and the computational load is greatly reduced. Simulations were conducted in which a highly maneuvering target was tracked in a cluttered environment using two sensors. The results illustrate the superiority of the proposed algorithm over comparative schemes, both in the accuracy of the track estimates and in computational load.

20.
A fast image super-resolution reconstruction algorithm combining multigrid (MG) and generalized minimal residual (GMRES) methods is proposed. A regularization-based super-resolution reconstruction model is first formulated. After a systematic review of the MG and GMRES algorithms, an MG-GMRES algorithm is proposed for solving the nonsymmetric sparse linear systems arising in image super-resolution reconstruction, and the smoothing, restriction, and interpolation operations of MG-GMRES, as well as its computational complexity, are discussed in detail. Experiments show that the reconstruction results are quite effective and that the algorithm converges faster than MG, GMRES, and Richardson iteration.
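As a baseline for the linear solve at the heart of such reconstruction, the sketch below hands a regularized 1-D toy system to SciPy's GMRES; the tiny blur-and-downsample operator and Tikhonov-style smoothness prior are stand-ins for the paper's model, and the multigrid acceleration is omitted.

```python
import numpy as np
from scipy import sparse
from scipy.sparse.linalg import gmres

def sr_gmres_1d(y_low, scale=2, blur=(0.25, 0.5, 0.25), lam=0.01):
    """Toy 1-D super-resolution: recover x from y = S B x (blur then
    downsample) by solving the regularized normal equations
    (A^T A + lam * L^T L) x = A^T y with GMRES."""
    m = len(y_low)
    n = m * scale
    B = sparse.diags(blur, offsets=[-1, 0, 1], shape=(n, n), format='csr')  # blur
    S = sparse.csr_matrix((np.ones(m), (np.arange(m), np.arange(0, n, scale))),
                          shape=(m, n))                                     # downsample
    A = S @ B
    L = sparse.diags([1.0, -2.0, 1.0], offsets=[-1, 0, 1], shape=(n, n))    # smoothness prior
    # GMRES handles the nonsymmetric systems that arise once motion/warping
    # operators are included; here the system happens to be symmetric.
    M = A.T @ A + lam * (L.T @ L)
    x, info = gmres(M, A.T @ y_low, maxiter=500)
    return x
```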
