Similar Documents
20 similar documents found (search time: 648 ms)
1.
A novel image decomposition approach and its applications
Current state-of-the-art edge-preserving decomposition techniques may not fully separate textures while preserving edges, which can generate artifacts in applications such as edge detection and texture transfer. To solve this problem, a novel image decomposition approach based on explicitly separating texture from the large-scale components of an image is presented. We first apply a Gaussian structure-texture decomposition to separate the majority of textures from the input image. However, residual textures remain visible around strong edges. To remove these residuals, an asymmetric sampling operator is proposed, followed by a joint bilateral correction that removes the excessive blur it introduces. We demonstrate that our approach is well suited to tasks such as texture transfer, edge detection, non-photorealistic rendering, and tone mapping, and the results show it outperforms existing state-of-the-art image decomposition approaches.
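The paper's full pipeline is not reproduced here; as a rough illustration of the first stage only, a Gaussian structure-texture split (the function names and the choice of a separable blur with edge padding are my own assumptions) could be sketched as:

```python
import numpy as np

def gaussian_kernel1d(sigma, radius):
    """Normalized 1-D Gaussian kernel."""
    x = np.arange(-radius, radius + 1, dtype=float)
    k = np.exp(-0.5 * (x / sigma) ** 2)
    return k / k.sum()

def gaussian_blur(img, sigma=2.0):
    """Separable Gaussian blur with edge padding (large-scale structure)."""
    radius = int(3 * sigma)
    k = gaussian_kernel1d(sigma, radius)
    pad = np.pad(img, radius, mode="edge")
    # Convolve rows, then columns (separability of the Gaussian).
    rows = np.apply_along_axis(lambda r: np.convolve(r, k, mode="valid"), 1, pad)
    return np.apply_along_axis(lambda c: np.convolve(c, k, mode="valid"), 0, rows)

def structure_texture(img, sigma=2.0):
    """Split an image into large-scale structure and residual texture."""
    structure = gaussian_blur(img, sigma)
    texture = img - structure  # residual oscillations around the smooth part
    return structure, texture
```

By construction the two components sum back to the input exactly; the abstract's later steps (asymmetric sampling, joint bilateral correction) then operate on the residual around strong edges.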

2.
3.
Vector quantization (VQ) can perform efficient feature extraction from electrocardiogram (ECG) signals, with the advantages of dimensionality reduction and increased accuracy. However, existing dictionary learning algorithms for vector quantization are sensitive to dirty data, which compromises classification accuracy. To tackle this problem, we propose a novel dictionary learning algorithm that employs k-medoids clustering optimized by k-means++ and builds dictionaries by searching for and using representative samples, which avoids the interference of dirty data and thus boosts the classification performance of ECG systems based on vector quantization features. We apply our algorithm to vector quantization feature extraction for ECG beat classification and compare it with popular features such as sampling-point features, fast Fourier transform features, and discrete wavelet transform features, as well as with our previous beat vector quantization feature. The results show that the proposed method yields the highest accuracy and reduces the computational complexity of the ECG beat classification system. The proposed dictionary learning algorithm provides more efficient encoding of ECG beats and can improve ECG classification systems based on encoded features.
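The dictionary learning step (k-medoids seeded by k-means++) can be sketched as follows. This is a generic PAM-style implementation, not the authors' code; the update rule shown (swap each medoid for the cluster member minimizing total within-cluster distance) is one standard variant:

```python
import numpy as np

def kmeanspp_seed(X, k, rng):
    """k-means++ seeding: spread initial medoids apart, proportional to d^2."""
    idx = [rng.integers(len(X))]
    for _ in range(k - 1):
        d2 = np.min(((X[:, None] - X[idx]) ** 2).sum(-1), axis=1)
        idx.append(int(rng.choice(len(X), p=d2 / d2.sum())))
    return np.array(idx)

def k_medoids(X, k, iters=20, seed=0):
    """Voronoi-style k-medoids: codewords are actual samples, so a few
    dirty samples cannot drag a codeword off the data manifold."""
    rng = np.random.default_rng(seed)
    medoid_idx = kmeanspp_seed(X, k, rng)
    for _ in range(iters):
        d = ((X[:, None] - X[medoid_idx]) ** 2).sum(-1)  # squared distances
        labels = d.argmin(axis=1)
        new_idx = medoid_idx.copy()
        for j in range(k):
            members = np.where(labels == j)[0]
            if len(members) == 0:
                continue
            # Pick the member minimizing total distance to its cluster.
            within = ((X[members][:, None] - X[members]) ** 2).sum(-1).sum(1)
            new_idx[j] = members[within.argmin()]
        if np.array_equal(new_idx, medoid_idx):
            break
        medoid_idx = new_idx
    return medoid_idx, labels
```

The returned medoid indices point at real training samples, which is what allows the resulting dictionary to ignore isolated dirty beats.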

4.
The construction of the neighborhood is a critical problem in manifold learning. Most manifold learning algorithms use a fixed neighborhood parameter (such as k in k-NN), but this may not work well over the entire manifold, since curvature and sampling density can vary across it. Although some dynamic neighborhood algorithms have been proposed, they are limited by either another global parameter or a restrictive assumption. This paper proposes a new approach that selects a dynamic neighborhood for each point while constructing the tangent subspace, based on the local sampling density and manifold curvature. The parameters of the approach can be determined automatically by computing the correlation coefficient between the matrices of pairwise geodesic distances in the input and output spaces. When applied to ISOMAP, experiments on synthetic data as well as real-world patterns demonstrate that the proposed approach efficiently maintains an accurate low-dimensional representation of the manifold data with less distortion, and gives a higher average classification rate than competing methods.
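The automatic parameter-selection criterion mentioned, a correlation between input-space geodesic distances and output-space distances, might be computed as below. This is a sketch: the function name and the use of plain Pearson correlation over the upper triangle of the distance matrices are my assumptions.

```python
import numpy as np

def embedding_fidelity(D_geo, Y):
    """Correlation between input-space geodesic distances and output-space
    Euclidean distances of the embedding Y (higher = less distortion)."""
    D_out = np.sqrt(((Y[:, None] - Y[None]) ** 2).sum(-1))
    iu = np.triu_indices(len(Y), k=1)  # each pair counted once
    return np.corrcoef(D_geo[iu], D_out[iu])[0, 1]
```

A parameter sweep would keep the neighborhood setting whose embedding maximizes this score.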

5.
The finite Gaussian mixture model is one of the most popular frameworks for modeling classes in probabilistic model-based image segmentation. However, the tails of the Gaussian distribution are often shorter than required to model an image class, and the class parameter estimates are affected by pixels that are atypical of the components of the fitted Gaussian mixture. In this regard, the paper presents a novel way to model the image as a mixture of a finite number of Student's t-distributions for the image segmentation problem. The Student's t-distribution provides a longer-tailed alternative to the Gaussian distribution and gives reduced weight to outlier observations during the parameter estimation step of the finite mixture model. Incorporating the merits of the Student's t-distribution into the hidden Markov random field framework, a novel algorithm is proposed for robust and automatic image segmentation, and its performance is demonstrated on a set of HEp-2 cell and natural images. Integrating a bias field correction step within the proposed framework, a simultaneous segmentation and bias field correction algorithm is also proposed for segmentation of magnetic resonance (MR) images. The efficacy of the proposed approach, along with a comparison with related algorithms, is demonstrated on a set of real and simulated brain MR images, both qualitatively and quantitatively.
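The outlier downweighting the abstract relies on comes from the E-step of Student's-t fitting, where each sample receives weight u_i = (ν + d)/(ν + δ_i²), with δ_i² the squared Mahalanobis distance to the component. A minimal sketch (function and variable names are mine):

```python
import numpy as np

def t_weights(X, mu, cov, nu):
    """E-step weights of a d-dimensional Student's t component:
    u_i = (nu + d) / (nu + delta_i^2). Outliers get small u_i, so they
    contribute little to the weighted mean/covariance updates."""
    d = X.shape[1]
    diff = X - mu
    delta2 = np.einsum("ij,jk,ik->i", diff, np.linalg.inv(cov), diff)
    return (nu + d) / (nu + delta2)
```

As ν → ∞ the weights tend to 1 and the Gaussian behavior is recovered; small ν gives the heavy tails the paper exploits.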

6.
Generalized Hough transform: fast randomized detection of multiple circles
A candidate circle is determined from two randomly sampled image points and a third image point found by searching along the perpendicular bisector of the first two. When sampling the two image points, invalid samples are reduced by excluding isolated and semi-continuous noise points; when searching for the third point of a candidate circle, these two kinds of noise points and non-concyclic points are excluded, and a fast method is given for confirming whether a candidate circle is a true circle, minimizing invalid computation as far as possible. Numerical experiments show that the proposed algorithm can rapidly detect multiple circles; when detecting multiple circles in the presence of noise, its detection speed is an order of magnitude faster than the randomized circle detection algorithm.
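The candidate-circle construction and confirmation steps might look like the sketch below: a circumcircle from three points plus a fraction-of-inliers check. The tolerance and acceptance fraction are illustrative values, not the paper's.

```python
import numpy as np

def circle_from_3pts(p1, p2, p3):
    """Circumcircle (center, radius) of three non-collinear points."""
    ax, ay = p1; bx, by = p2; cx, cy = p3
    D = 2 * (ax * (by - cy) + bx * (cy - ay) + cx * (ay - by))
    if abs(D) < 1e-12:
        return None  # collinear: no circle
    ux = ((ax**2 + ay**2) * (by - cy) + (bx**2 + by**2) * (cy - ay)
          + (cx**2 + cy**2) * (ay - by)) / D
    uy = ((ax**2 + ay**2) * (cx - bx) + (bx**2 + by**2) * (ax - cx)
          + (cx**2 + cy**2) * (bx - ax)) / D
    c = np.array([ux, uy])
    return c, np.linalg.norm(p1 - c)

def confirm_circle(points, center, radius, tol=0.05, min_frac=0.2):
    """Accept a candidate circle if enough edge points lie near it."""
    d = np.abs(np.linalg.norm(points - center, axis=1) - radius)
    return (d < tol).mean() >= min_frac
```

In the paper's scheme the third point is searched on the perpendicular bisector of the first two, which guarantees it and the sampled pair are concyclic candidates before confirmation.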

7.
The application of spaceborne lidar data to mapping of ecosystem structure is currently limited by the relatively small fraction of the earth's surface sampled by these sensors, a limitation likely to remain through the next generation of lidar missions. Currently planned missions will collect transects of data with contiguous observations along each transect; transects will be spread over swaths of multiple kilometers, a sampling pattern that yields high sampling density along track and low density across track. In this work we demonstrate the advantages of a hybrid spatial sampling approach that combines a single conventional transect with a systematic grid of observations, and compare it with traditional lidar sampling that distributes the same number of observations into five transects. We show that the hybrid approach achieves benchmarks for the spatial distribution of observations in approximately one third of the time required for transect sampling, and produces estimates of ecosystem height with half the uncertainty of those from transect sampling. This type of approach is made possible by a suite of technologies known together as Electronically Steerable Flash Lidar. A spaceborne sensor with the flexibility of this technology would produce estimates of ecosystem structure that are more reliable and spatially complete than a similar number of observations collected in transects, and should be considered for future lidar remote sensing missions.

8.
In this paper, we propose an edge detection technique based on local smoothing of the image followed by statistical hypothesis testing on the gradient. With an edge point defined as a zero-crossing of the Laplacian, it is said to be a significant edge point if the gradient at this point is larger than a threshold s(ε) defined as follows: if the image I is pure noise, then the probability of ∥∇I(x)∥ ≥ s(ε), conditionally on ΔI(x) = 0, is less than ε. In other words, a significant edge is an edge that has a very low probability of being caused by noise. We show that the threshold s(ε) can be computed explicitly in the case of stationary Gaussian noise. In the images we are interested in, which are obtained by tomographic reconstruction from a radiograph, this result does not apply directly since the Gaussian noise is no longer stationary. Nevertheless, we are still able to give the law of the gradient conditionally on the zero-crossing of the Laplacian, and thus compute the threshold s(ε). We end the paper with experiments and compare the results with those obtained with other edge detection methods.
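When no closed form is available, a threshold of this kind can also be estimated by Monte Carlo on a pure-noise image: take the (1 − ε) quantile of the gradient magnitude at Laplacian zero-crossings. This empirical stand-in for the paper's analytic s(ε) might be sketched as:

```python
import numpy as np

def empirical_edge_threshold(shape=(256, 256), eps=0.05, seed=0):
    """Monte-Carlo estimate of s(eps): the gradient magnitude that pure
    noise exceeds, at Laplacian zero-crossings, with probability eps."""
    rng = np.random.default_rng(seed)
    I = rng.normal(size=shape)
    gy, gx = np.gradient(I)
    grad = np.hypot(gx, gy)
    # 5-point discrete Laplacian (periodic boundary for simplicity).
    lap = (np.roll(I, 1, 0) + np.roll(I, -1, 0)
           + np.roll(I, 1, 1) + np.roll(I, -1, 1) - 4 * I)
    # Zero-crossings of the Laplacian between horizontal neighbours.
    zc = np.signbit(lap) != np.signbit(np.roll(lap, -1, 1))
    return np.quantile(grad[zc], 1 - eps)
```

Smaller ε demands a larger gradient before an edge is declared significant, i.e. the threshold is monotone decreasing in ε.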

9.
An optimized Lp-norm compressed sensing image reconstruction algorithm
Objective: Reconstruction algorithms are one of the key techniques in compressed sensing theory and play a critical role in scientific research. Commonly used reconstruction algorithms include non-convex L0-norm optimization and convex L1-norm optimization, but their drawbacks are limited reconstruction accuracy and long running times. To overcome these defects, and to improve the reconstruction accuracy and efficiency of existing Lp-norm compressed sensing image reconstruction algorithms, this paper proposes an improved algorithm. Method: To address the heavy computation caused by an indefinite Hessian matrix in the sequential quadratic programming (SQP) method for the Lagrangian function, a merit function is introduced to correct the Hessian; this modified SQP method is combined with block compressed sensing of images, yielding an Lp-norm compressed sensing image reconstruction algorithm. Results: At a sampling rate of 40%, the proposed algorithm achieves an SNR of 34.28 dB, 2% higher than the BOMP (block orthogonal matching pursuit) algorithm and 13.2% higher than when a penalty function is used as the correction method; its running time is 190.55 s, 13.4% faster than BOMP and 67.5% faster than the penalty-function variant. At a sampling rate of 50%, the SNR is 35.42 dB, 2.4% higher than BOMP and 12.8% higher than the penalty-function variant, with a running time of 196.67 s, 68.2% faster than BOMP and 81.7% faster than the penalty-function variant. At 60%, the SNR is 36.33 dB, 3.2% higher than BOMP and 8.2% higher than the penalty-function variant, with a running time of 201.72 s, 82.3% faster than BOMP and 86.6% faster than the penalty-function variant. At 70%, the SNR is 38.62 dB, 2.5% higher than BOMP and 9.8% higher than the penalty-function variant, with a running time of 214.68 s, 88.12% faster than BOMP and 91.1% faster than the penalty-function variant. The experiments show that at the same sampling rate the improved algorithm outperforms BOMP and the other algorithms in both reconstruction accuracy and running time, and that higher sampling rates yield higher reconstruction accuracy and shorter reconstruction times. Conclusion: Comparing the proposed algorithm with BOMP and other reconstruction algorithms in SNR and running time shows that it is clearly superior at all sampling rates; even at a sampling rate of only 20.5%, the SNR reaches 85.1543 dB and the reconstructed image is fairly clear. The main advantage of the algorithm is its use of block compressed sensing, which improves reconstruction efficiency and reduces reconstruction time; its drawback is blocking artifacts in the reconstructed image at low sampling rates. Future work will study how to reconstruct images with high accuracy at low sampling rates and eliminate these blocking artifacts.
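The block compressed sensing measurement step described in the Method section (a shared Gaussian measurement matrix applied to each vectorized block) might be sketched as follows. The block size, sampling rate, and normalization are illustrative choices, not the paper's:

```python
import numpy as np

def block_cs_measure(img, block=8, rate=0.4, seed=0):
    """Block compressed sensing measurement: each BxB block is vectorized
    and multiplied by one shared Gaussian measurement matrix Phi."""
    rng = np.random.default_rng(seed)
    m = int(rate * block * block)           # measurements per block
    Phi = rng.normal(size=(m, block * block)) / np.sqrt(m)
    H, W = img.shape
    # Split the image into non-overlapping blocks, one row per block.
    blocks = (img.reshape(H // block, block, W // block, block)
                 .transpose(0, 2, 1, 3).reshape(-1, block * block))
    return Phi, blocks @ Phi.T              # one m-vector per block
```

Processing blocks independently is what keeps the per-block problem small and the reconstruction fast, at the price of the blocking artifacts the Conclusion mentions at low sampling rates.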

10.
Directed model checking is a well-established approach for detecting error states in concurrent systems. A popular variant for finding shortest error traces is to apply the A* search algorithm with distance heuristics that never overestimate the real error distance. An important class of such heuristics is the class of pattern database heuristics, which are built on abstractions of the system under consideration. In this paper, we propose downward pattern refinement, a systematic approach to constructing pattern database heuristics for concurrent systems of timed automata. First, we propose a general framework for pattern databases in the context of timed automata and show that desirable theoretical properties hold for the resulting pattern database. Afterward, we formally define a concept for measuring the accuracy of abstractions. Based on this concept, we propose an algorithm for computing succinct abstractions that are still accurate enough to produce informed pattern databases. We evaluate our approach on large and complex industrial problems; the experiments show the practical potential of the resulting pattern database heuristic.

11.
Objective: Line-feature matching between image pairs is an important research topic in computer vision, and existing matching methods all suffer from mismatches to varying degrees. The main causes are line detection results that do not lie on the true edges of the image, and a lack of consistency verification of the matched line pairs. This paper therefore proposes a line-feature correction and purification method for line matching between image pairs. Method: First, edge features of the image pair are extracted to obtain binarized edge maps, and a gradient attraction map is built from the edge gradient map and the gradient vector flow (GVF). Second, line features are extracted from the image pair with a line detector, and their positions are corrected using the gradient attraction map. Finally, epipolar lines are computed from point-feature matches and combined with the line matching results to determine local verification regions; line matches are verified by feature similarity within small neighborhoods selected by random sample consensus, thereby rejecting mismatched lines. Results: In a matching experiment on a wide-baseline image pair, the corrected results removed most of the mismatched line pairs, raising matching accuracy from 50% to 84% compared with directly applying the line matching algorithm; further purification of these results achieved 100% accuracy. In another wide-baseline experiment, the proposed method improved matching accuracy by nearly 30%. In a third experiment, where the camera pose varied little and the images differed mainly in scale, accuracy rose from 92% to 100% after processing with the proposed method. Conclusion: The proposed method substantially improves the accuracy of line-feature matching between image pairs, can easily correct and purify the results of other line matching methods, and is highly practical.

12.
Noise statistics are essential for estimation performance. In practical situations, however, a priori information about the noise statistics is often imperfect, and previous work on noise statistics identification in linear systems still requires initial prior knowledge of the noise. A novel approach is presented in this paper to resolve this paradox. First, we apply the H∞ filter to obtain the system state estimates without the assumptions about the noise made by conventional adaptive filters. Then, using the state estimates obtained from the H∞ filter, better estimates of the noise mean and covariance can be achieved, which improves estimation performance. The proposed approach makes the best use of the system knowledge without a priori information, at modest computational cost, which makes online application possible. Finally, numerical examples are presented to show the efficiency of the approach.
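Once robust state estimates are available, the noise mean and covariance can be read off the measurement residuals. A minimal sketch of that second step, assuming a linear measurement model z_k = H x_k + v_k (the function name and interface are my own):

```python
import numpy as np

def noise_stats_from_residuals(z, H, x_est):
    """Estimate measurement-noise mean and covariance from the residuals
    r_k = z_k - H x_k, built with robust (e.g. H-infinity) state estimates.
    z: (N, m) measurements, H: (m, n) matrix, x_est: (N, n) estimates."""
    r = z - x_est @ H.T
    mean = r.mean(axis=0)
    cov = np.atleast_2d(np.cov(r.T, ddof=1))
    return mean, cov
```

The point of the paper's ordering is that the H∞ filter supplies usable x_est without assuming these very statistics first.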

13.
In this paper, the problem of indoor localization in wireless networks is addressed with a swarm-based approach. We assume that the positions of a small number of sensor nodes, denoted anchor nodes (ANs), are known, and we aim to find the position of a target node (TN) on the basis of the estimated distances between each AN and the TN. Since ultra-wideband (UWB) technology is particularly suited to localization (owing to its remarkable time resolution), we consider a network composed of UWB devices. More precisely, we carry out an experimental investigation using the PulsON 410 ranging and communication modules (RCMs) produced by Time Domain. Using four of them as ANs and one as the TN, various topologies are considered to evaluate the accuracy of the proposed swarm-based localization approach, which relies on the pairwise (AN-TN) distances estimated by the RCMs. We then investigate how the accuracy of the localization algorithm changes when a recently proposed stochastic correction, designed to reduce the distance estimation error, is applied to the distance estimates. Our experimental results show that good accuracy is obtained in all the considered scenarios, especially when the swarm-based localization algorithm is applied to the stochastically corrected distances. The results are also satisfying in terms of software execution time, making the proposed approach applicable to real-time dynamic localization problems.
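A swarm-based position estimate of the kind described can be sketched as a particle swarm minimizing the squared range residuals to the anchors. The PSO coefficients below are common textbook values, not the paper's:

```python
import numpy as np

def pso_localize(anchors, ranges, iters=200, n_particles=40, seed=0):
    """Particle-swarm search for the 2-D point whose distances to the
    anchors best match the measured ranges (least-squares cost)."""
    rng = np.random.default_rng(seed)
    lo, hi = anchors.min(0) - 1, anchors.max(0) + 1
    pos = rng.uniform(lo, hi, (n_particles, 2))
    vel = np.zeros_like(pos)
    cost = lambda p: ((np.linalg.norm(p[:, None] - anchors, axis=2)
                       - ranges) ** 2).sum(1)
    pbest, pbest_c = pos.copy(), cost(pos)
    gbest = pbest[pbest_c.argmin()].copy()
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, 1))
        # Inertia 0.7, cognitive/social coefficients 1.5 (standard choices).
        vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
        pos = pos + vel
        c = cost(pos)
        better = c < pbest_c
        pbest[better], pbest_c[better] = pos[better], c[better]
        gbest = pbest[pbest_c.argmin()].copy()
    return gbest
```

With four non-collinear anchors the least-squares cost has a single global minimum at the true position, so the swarm's global best converges to it.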

14.
In spectrum sensing of GHz-level wideband signals, directly sampling the wideband signal requires a sampling rate so high that it exceeds the specifications of existing analog-to-digital converters; moreover, accurately estimating the bands occupied by primary signals can further improve spectrum utilization. This paper therefore proposes a wideband spectrum sensing method based on the modulated wideband converter (MWC), multi-subband signal sampling, and the wavelet transform. First, the MWC performs low-rate sampling of the wideband signal to obtain subband signals. Then, a method for estimating the noise power and detection threshold is proposed, and energy detection is used to sense the non-noise subbands. Finally, the wavelet transform is applied to the signal subbands for spectral edge detection, determining the exact location of the bands occupied by primary-user signals. Simulation results verify the feasibility and effectiveness of the proposed wideband spectrum sensing method.
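The energy detection step over subbands might look like the sketch below. The median-based noise floor and the constant-factor threshold are my own simplifications of the noise power and threshold estimation the abstract describes:

```python
import numpy as np

def sense(subbands, factor=3.0):
    """Flag occupied subbands by average energy against a threshold set
    from an estimated noise floor (median energy, assuming most subbands
    are noise-only)."""
    energy = (np.abs(subbands) ** 2).mean(axis=1)  # per-subband energy
    noise_power = np.median(energy)                # crude noise-floor estimate
    occupied = energy > factor * noise_power
    return occupied, noise_power
```

A wavelet edge detector would then refine the occupied-band boundaries inside the flagged subbands.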

15.
This paper introduces a new nonparametric estimation approach inspired by quantum mechanics. Kernel density estimation associates a function with each data sample; in classical kernel estimation theory, the probability density function is calculated by summing all the kernels. The proposed approach assumes that each data sample is associated with a quantum-mechanical particle that has a radial activation field around it. In quantum mechanics, the Schrödinger differential equation is used to define the locations of particles given their observed energy level. In our approach, we consider the known location of each data sample and model the corresponding probability density function using an analogy with the quantum potential function. The kernel scale is estimated from the distributions of K-nearest-neighbour statistics. To apply the proposed algorithm to pattern classification, we use the local Hessian to detect the modes of the quantum potential hypersurface. Each mode is treated as a nonparametric class, defined by means of a region-growing algorithm. We apply the proposed algorithm to artificial data and to topography segmentation from radar images of terrain.
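The kernel scale estimation from K-nearest-neighbour statistics could be sketched as follows; taking the mean distance to the k nearest neighbours of each sample is one common rule, assumed here rather than taken from the paper:

```python
import numpy as np

def knn_kernel_scales(X, k=3):
    """Per-sample kernel bandwidth: mean distance to the k nearest
    neighbours. Adapts the scale to the local sampling density."""
    D = np.sqrt(((X[:, None] - X[None]) ** 2).sum(-1))  # pairwise distances
    D_sorted = np.sort(D, axis=1)[:, 1:k + 1]           # drop self-distance 0
    return D_sorted.mean(axis=1)
```

Samples in dense regions receive small bandwidths (sharp kernels), isolated samples large ones, which is the behavior a density-adaptive potential needs.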

16.
Investors in futures markets often employ trading systems that depend on reference patterns (templates) to detect real-time buy or sell signals from the market. They prepare in advance a number of reference patterns that the market movement might follow, and then match the current market against one of them. One popular way to prepare templates is to fix a relatively small number that represent possible market movements efficiently. The underlying assumption of this approach is, of course, that the current market movement is close enough to one of the templates; however, there is always a risk that it is sufficiently close to none of them. In this article we investigate the question of the appropriate number of templates (the template cardinality I) in terms of profitability. We show that one may improve profitability by increasing I, and that random pattern sampling plays a key role in that case. An empirical study is carried out on the Korean futures market.

17.
Conventional MPC uses quadratic programming (QP) to minimise, on-line, a cost over n linearly constrained control moves. However, stability constraints often require a large n, thereby increasing the on-line computation and rendering the approach impracticable in the case of fast sampling. Here, we explore an alternative that requires a fraction of the computational cost (which increases only linearly with n), and propose an extension which, in all but a small class of models, matches to within a fraction of a percentage point the performance of the optimal solution obtained through QP. The provocative title of the paper is intended to point out that the proposed approach offers a very attractive alternative to QP-based MPC.

18.
Statistical sampling to characterize recent United States land-cover change
The U.S. Geological Survey, in conjunction with the U.S. Environmental Protection Agency, is conducting a study focused on developing methods for estimating changes in land cover and landscape pattern for the conterminous United States from 1973 to 2000. Eleven land-cover and land-use classes are interpreted from Landsat imagery for five sampling dates. Because of the high cost, and the potential effect of classification error, associated with developing change estimates from wall-to-wall land-cover maps, a probability sampling approach is employed. The basic sampling unit is a 20×20 km area, and land cover is obtained for each 60×60 m pixel within the sampling unit. The sampling design is stratified by ecoregion, and land-cover change estimates are constructed for each stratum. The sampling design and analyses are documented, and estimates of change accompanied by standard errors are presented to demonstrate the methodology. Analyses of the completed strata suggest that the sampling unit should be reduced to a 10×10 km block, and that poststratified estimation and regression estimation are viable options for improving the precision of estimated change.
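The stratified estimation with standard errors described can be sketched as follows, assuming simple random sampling of blocks within each stratum and area-share stratum weights (a textbook stratified estimator, not the study's exact formulas):

```python
import numpy as np

def stratified_change_estimate(samples, weights):
    """Stratified estimate of a change proportion and its standard error.
    samples[h]: per-block change fractions observed in stratum h.
    weights[h]: stratum h's share of total area (weights sum to 1)."""
    est, var = 0.0, 0.0
    for y, w in zip(samples, weights):
        y = np.asarray(y, dtype=float)
        est += w * y.mean()
        # Simple random sampling within the stratum: Var = w^2 * s^2 / n_h.
        var += w ** 2 * y.var(ddof=1) / len(y)
    return est, np.sqrt(var)
```

Strata with homogeneous change contribute little variance, which is exactly why stratifying on ecoregions improves precision over an unstratified sample.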

19.
20.
Point clouds obtained with 3D scanners or by image-based reconstruction techniques are often corrupted with a significant amount of noise and outliers. Traditional methods for point cloud denoising largely rely on local surface fitting (e.g. jets or MLS surfaces), local or non-local averaging, or statistical assumptions about the underlying noise model. In contrast, we develop a simple data-driven method for removing outliers and reducing noise in unordered point clouds. We base our approach on a deep learning architecture adapted from PCPNet, which was recently proposed for estimating local 3D shape properties in point clouds. Our method first classifies and discards outlier samples, and then estimates correction vectors that project noisy points onto the original clean surfaces. The approach is efficient and robust to varying amounts of noise and outliers, while being able to handle large, densely sampled point clouds. In our extensive evaluation, on both synthetic and real data, we show increased robustness to strong noise levels compared with various state-of-the-art methods, enabling accurate surface reconstruction from extremely noisy real data obtained by range scans. Finally, the simplicity and universality of our approach make it very easy to integrate into any existing geometry processing pipeline. Both the code and pre-trained networks can be found on the project page ( https://github.com/mrakotosaon/pointcleannet ).
