Similar Documents
20 similar documents found.
1.
Poisson-disk sampling is a popular sampling method because of its blue noise power spectrum, but generating these samples is computationally very expensive. In this paper, we propose an efficient method for fast generation of a large number of blue noise samples from a small initial patch of Poisson-disk samples that can be generated with any existing approach. Our main idea is to convolve this set of samples with another to generate our final set of samples. We use the convolution theorem from signal processing to show that the spectrum of the resulting sample set preserves the blue noise properties. Since our method is approximate, it introduces error with respect to true Poisson-disk samples, but we show both mathematically and practically that this error is only a function of the number of samples in the small initial patch and is therefore bounded. Our method is parallelizable; we demonstrate a GPU implementation that runs more than 10 times faster than any previous method and generates more than 49 million 2D samples per second. The proposed approach also extends to multidimensional blue noise samples.
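A minimal sketch of the construction, assuming a brute-force dart-throwing generator for the small initial patch and a plain grid of tile offsets standing in for the paper's second sample set; every name below is illustrative, not the authors' code:

```python
import numpy as np

def dart_throwing(n, r, trials=100_000, seed=0):
    """Generate up to n Poisson-disk samples in [0,1)^2 by naive dart throwing."""
    rng = np.random.default_rng(seed)
    pts = []
    for _ in range(trials):
        p = rng.random(2)
        if all(np.hypot(*(p - q)) >= r for q in pts):
            pts.append(p)
            if len(pts) == n:
                break
    return np.asarray(pts)

def convolve_point_sets(patch, offsets):
    """'Convolve' two point sets: place a copy of `patch` at every offset.
    The spectrum of the result is (up to scaling) the product of the two
    sets' spectra, which is how the blue-noise profile is preserved."""
    out = (offsets[:, None, :] + patch[None, :, :]).reshape(-1, 2)
    return np.mod(out, 1.0)  # wrap into the unit torus

# Small blue-noise patch, replicated at a 4x4 grid of tile offsets.
patch = dart_throwing(64, r=0.08) * 0.25            # shrink patch into a 1/4 tile
grid = np.stack(np.meshgrid(np.arange(4), np.arange(4)), -1).reshape(-1, 2) / 4.0
samples = convolve_point_sets(patch, grid)
print(samples.shape)  # (1024, 2)
```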

2.
In this paper, we propose a method for automated screening of congenital heart diseases in children through heart sound analysis. Our method categorizes pathological murmurs based on the heart sections that initiate them. We show that these pathological murmur categories can be identified by examining the heart sound energy over specific frequency bands, which we call Arash-Bands. To specify the Arash-Band for a category, we evaluate the energy of the heart sound over all possible frequency bands. The Arash-Band is the frequency band that yields the lowest error in clustering the instances of that category against the normal ones. The energy content of the Arash-Bands for different categories constitutes a feature vector suitable for classification using a neural network. To train and evaluate the performance of the proposed method, we use a training data-bank as well as a test data-bank, collectively consisting of ninety samples (normal and abnormal). Our results show that in more than 94% of cases, our method correctly identifies children with congenital heart diseases. This percentage improves to 100% when we use the Jack-Knife validation method over all 90 samples.
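A hedged sketch of the band-search step: FFT-based band energy plus an exhaustive scan over candidate bands. The simple threshold rule (assuming abnormal recordings carry more energy in the band) and all function names are illustrative assumptions, not the paper's exact procedure:

```python
import numpy as np

def band_energy(signal, fs, f_lo, f_hi):
    """Energy of `signal` in the band [f_lo, f_hi] Hz via the FFT."""
    spectrum = np.abs(np.fft.rfft(signal)) ** 2
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    mask = (freqs >= f_lo) & (freqs <= f_hi)
    return spectrum[mask].sum()

def find_arash_band(normals, abnormals, fs, step=10, f_max=500):
    """Scan candidate bands; keep the one whose energy best separates
    abnormal from normal recordings with a simple midpoint threshold
    (assumes abnormal energy is higher inside the right band)."""
    best_band, best_err = None, np.inf
    for lo in range(0, f_max, step):
        for hi in range(lo + step, f_max + step, step):
            e_n = [band_energy(s, fs, lo, hi) for s in normals]
            e_a = [band_energy(s, fs, lo, hi) for s in abnormals]
            thresh = (np.median(e_n) + np.median(e_a)) / 2
            err = sum(e > thresh for e in e_n) + sum(e <= thresh for e in e_a)
            if err < best_err:
                best_band, best_err = (lo, hi), err
    return best_band
```

The per-category band energies would then be stacked into the feature vector fed to the neural network.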

3.
Sentiment classification models usually assume that the classes in a dataset are balanced, but in practice they are not. When class imbalance is present, training is biased toward the majority class, and classification accuracy on the minority class suffers. In addition, newly added samples suffer from contribution decay during training, so their influence on sentiment classification diminishes and the final classification result degrades. To address these problems, this paper proposes a...

4.
Using all samples in the optimization process does not produce robust results on datasets with label noise, because gradients computed from the losses of noisy samples push the optimization in the wrong direction. In this paper, we recommend using only the samples in the mini-batch whose loss is below a threshold determined during optimization, instead of using all samples. Our proposed method, Adaptive-k, aims to exclude label-noise samples from the optimization process and make it robust. On noisy datasets, we found that a threshold-based approach such as Adaptive-k produces better results than using all samples or a fixed number of low-loss samples in the mini-batch. On the basis of our theoretical analysis and experimental results, we show that Adaptive-k comes closest to the performance of the Oracle, in which noisy samples are entirely removed from the dataset. Adaptive-k is a simple but effective method: it requires no prior knowledge of the dataset's noise ratio, no additional model training, and it does not increase training time significantly. In the experiments, we also show that Adaptive-k is compatible with different optimizers such as SGD, SGDM, and Adam. The code for Adaptive-k is available at GitHub.
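A PyTorch sketch of a threshold-based selection step in this spirit; the running-mean threshold is an illustrative stand-in for the paper's adaptive rule, and all names here are assumptions:

```python
import torch

def adaptive_k_style_step(model, batch_x, batch_y, optimizer, loss_fn, state,
                          momentum=0.9):
    """One training step: only samples whose loss falls below a running
    threshold contribute to the gradient. `loss_fn` must return per-sample
    losses (reduction='none')."""
    losses = loss_fn(model(batch_x), batch_y)
    with torch.no_grad():
        batch_mean = losses.mean().item()
        state['thresh'] = momentum * state.get('thresh', batch_mean) \
                          + (1 - momentum) * batch_mean
    keep = losses < state['thresh']          # drop suspected noisy-label samples
    if keep.any():
        optimizer.zero_grad()
        losses[keep].mean().backward()
        optimizer.step()
    return state

# Usage sketch:
# loss_fn = torch.nn.CrossEntropyLoss(reduction='none')
# state = {}
# for x, y in loader:
#     state = adaptive_k_style_step(model, x, y, opt, loss_fn, state)
```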

5.
Most machine learning methods degrade when the source and target domains have different data distributions. To address this, we propose a new face recognition method based on the idea of domain adaptation. First, relative weights are computed for the source-domain samples, and samples that differ greatly from the target-domain samples are removed, reducing the discrepancy between the two domains. Then a regularized Bregman divergence is used to obtain a common subspace, capturing what the two domains share. Finally, the source-domain samples are targetized using target-domain samples, making full use of the information specific to the target domain. A classification model built on this basis can fully exploit both the commonality between the two domains and the characteristics of the target domain, achieving accurate classification in the target domain. To evaluate the method, experiments were run on multiple datasets; the results show that its performance improves on that of several other methods.
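A small sketch of the first step only, under the assumption that a source sample's relative weight can be scored by its distance to the nearest target sample; the paper's exact weighting scheme is not reproduced here:

```python
import numpy as np

def filter_source_samples(X_src, X_tgt, keep_ratio=0.8):
    """Score each source sample by its distance to the nearest target sample
    and drop the farthest ones, one simple way to realize relative-weight
    filtering before the subspace step."""
    d2 = ((X_src[:, None, :] - X_tgt[None, :, :]) ** 2).sum(-1)  # pairwise sq. dists
    nearest = np.sqrt(d2.min(axis=1))
    weights = 1.0 / (1.0 + nearest)      # closer to the target => larger weight
    order = np.argsort(-weights)
    keep = order[: int(keep_ratio * len(X_src))]
    return X_src[keep], weights[keep]
```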

6.
We provide a simple method that extracts an isosurface that is manifold and intersection‐free from a function over an arbitrary octree. Our method samples the function dual to minimal edges, faces, and cells, and we show how to position those samples to reconstruct sharp and thin features of the surface. Moreover, we describe an error metric designed to guide octree expansion such that flat regions of the function are tiled with fewer polygons than curved regions to create an adaptive polygonalization of the isosurface. We then show how to improve the quality of the triangulation by moving dual vertices to the isosurface and provide a topological test that guarantees we maintain the topology of the surface. While we describe our algorithm in terms of extracting surfaces from volumetric functions, we also show that our algorithm extends to generating manifold level sets of co‐dimension 1 of functions of arbitrary dimension.

7.
Anisotropic noise samples
We present a practical approach to generate stochastic anisotropic samples with Poisson-disk characteristic over a two-dimensional domain. In contrast to isotropic samples, we understand anisotropic samples as non-overlapping ellipses whose size and density match a given anisotropic metric. Anisotropic noise samples are useful for many visualization and graphics applications. The spot samples can be used as input for texture generation, e.g., line integral convolution (LIC), but can also be used directly for visualization. The definition of the spot samples using a metric tensor makes them especially suitable for the visualization of tensor fields that can be translated into a metric. Our work combines ideas from sampling theory and mesh generation. To generate these samples with the desired properties we construct a first set of non-overlapping ellipses whose distribution closely matches the underlying metric. This set of samples is used as input for a generalized anisotropic Lloyd relaxation to distribute noise samples more evenly. Instead of computing the Voronoi tessellation explicitly, we introduce a discrete approach which combines the Voronoi cell and centroid computation in one step. Our method supports automatic packing of the elliptical samples, resulting in textures similar to those generated by anisotropic reaction-diffusion methods. We use Fourier analysis tools for quality measurement of uniformly distributed samples. The resulting samples have nice sampling properties, for example, they satisfy a blue noise property where low frequencies in the power spectrum are reduced to a minimum.

8.
In this paper we present the first practical method for importance sampling functions represented as spherical harmonics (SH). Given a spherical probability density function (PDF) represented as a vector of SH coefficients, our method warps an input point set to match the target PDF using hierarchical sample warping. Our approach is efficient and produces high quality sample distributions. As a by-product of the sampling procedure we produce a multi-resolution representation of the density function as either a spherical mip-map or Haar wavelet. By exploiting this implicit conversion we can extend the method to distribute samples according to the product of an SH function with a spherical mip-map or Haar wavelet. This generalization has immediate applicability in rendering, e.g., importance sampling the product of a BRDF and an environment map where the lighting is stored as a single high-resolution wavelet and the BRDF is represented in spherical harmonics. Since spherical harmonics can be efficiently rotated, this product can be computed on-the-fly even if the BRDF is stored in local-space. Our sampling approach generates over 6 million samples per second while significantly reducing precomputation time and storage requirements compared to previous techniques.
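A simplified sketch of hierarchical sample warping on a planar (rather than spherical) density, assuming a power-of-two density grid and axis-alternating mass-fraction splits; this is a stand-in for the paper's spherical procedure:

```python
import numpy as np

def hierarchical_warp(pts, density):
    """Warp points uniform in [0,1)^2 so their distribution matches `density`,
    a 2^n x 2^n non-negative array (row index = y). At every level the point
    set is split at the density's mass fraction, alternating x and y axes,
    and each part is rescaled into its half of the current cell."""
    def warp(p, d, x0, y0, w, h, axis):
        if len(p) == 0 or d.size == 1:
            return p
        if axis == 0:                                    # split columns (x)
            half = d.shape[1] // 2
            m_lo, m_hi = d[:, :half].sum(), d[:, half:].sum()
            frac = 0.5 if m_lo + m_hi == 0 else m_lo / (m_lo + m_hi)
            split = x0 + frac * w
            lo, hi = p[p[:, 0] < split].copy(), p[p[:, 0] >= split].copy()
            if frac > 0:
                lo[:, 0] = x0 + (lo[:, 0] - x0) * (0.5 / frac)
            if frac < 1:
                hi[:, 0] = x0 + 0.5 * w + (hi[:, 0] - split) * (0.5 / (1 - frac))
            return np.vstack([warp(lo, d[:, :half], x0, y0, w / 2, h, 1),
                              warp(hi, d[:, half:], x0 + w / 2, y0, w / 2, h, 1)])
        else:                                            # split rows (y)
            half = d.shape[0] // 2
            m_lo, m_hi = d[:half].sum(), d[half:].sum()
            frac = 0.5 if m_lo + m_hi == 0 else m_lo / (m_lo + m_hi)
            split = y0 + frac * h
            lo, hi = p[p[:, 1] < split].copy(), p[p[:, 1] >= split].copy()
            if frac > 0:
                lo[:, 1] = y0 + (lo[:, 1] - y0) * (0.5 / frac)
            if frac < 1:
                hi[:, 1] = y0 + 0.5 * h + (hi[:, 1] - split) * (0.5 / (1 - frac))
            return np.vstack([warp(lo, d[:half], x0, y0, w, h / 2, 0),
                              warp(hi, d[half:], x0, y0 + h / 2, w, h / 2, 0)])
    return warp(np.asarray(pts, float).copy(), np.asarray(density, float),
                0.0, 0.0, 1.0, 1.0, 0)
```

Because each split preserves the relative order and stratification of the input points, a low-discrepancy input set stays well distributed after warping.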

9.
Chemical taxonomy of wood has received little study. Identifying precious woods at the molecular level from their chemical constituents using chemometric methods is therefore of real significance. In this work, GC-FID experiments were used to collect chromatographic data from 18 batches of rosewood samples spanning five species, including Dalbergia latifolia; the established experimental method showed good reproducibility. The chromatographic data were preprocessed by peak alignment and autoscaling, then projected by PCA. The 12 modeling samples separated into four classes, consistent with the known botanical classification of each sample. Using the established classification method (i.e., the PCA projection space), the remaining six unknown samples were identified and accurately clustered. The method applies modern analytical instrumentation and pattern recognition to the classification and identification of rosewood, providing a theoretical basis for the chemical classification and identification of precious woods.
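A minimal sketch of the autoscaling-plus-PCA step with scikit-learn; the peak table below is placeholder data, and peak alignment is assumed to have been done upstream:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

# Hypothetical peak table: rows = samples, columns = aligned GC-FID peak areas.
X_train = np.random.rand(12, 40)   # 12 modeling samples (placeholder data)
X_test = np.random.rand(6, 40)     # 6 samples to identify

# Autoscaling (mean-center, unit variance per peak), then PCA projection.
scaler = StandardScaler().fit(X_train)
pca = PCA(n_components=2).fit(scaler.transform(X_train))

scores_train = pca.transform(scaler.transform(X_train))
scores_test = pca.transform(scaler.transform(X_test))
# Unknowns are assigned to whichever of the four training clusters
# they fall nearest to in the PCA score space.
```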

10.
Developing methods for designing good classifiers from labeled samples whose distribution differs from that of the test samples is an important and challenging research issue in machine learning and its applications. This paper focuses on designing semi-supervised classifiers with high generalization ability by using unlabeled samples drawn from the same distribution as the test samples, and presents a semi-supervised learning method based on a hybrid discriminative and generative model. Although JESS-CM is one of the most successful semi-supervised classifier design frameworks based on a hybrid approach, it suffers from overfitting in the task setting we consider in this paper. We propose an objective function that utilizes both labeled and unlabeled samples for the discriminative training of hybrid classifiers, and we expect this objective function to mitigate the overfitting problem. We show the effect of the objective function by theoretical analysis and empirical evaluation. Our experimental results for text classification on four typical benchmark test collections confirm that, in our task setting, the proposed method outperformed the JESS-CM framework in most cases. We also confirm experimentally that the proposed method is useful for obtaining better performance when classifying samples into known classes (those present in the given labeled samples) versus unknown classes (those that are not).

11.
We present a new software ray tracing solution that efficiently computes visibilities in dynamic scenes. We first introduce a novel scene representation: ray-aligned occupancy map array (ROMA) that is generated by rasterizing the dynamic scene once per frame. Our key contribution is a fast and low-divergence tracing method computing visibilities in constant time, without constructing and traversing the traditional intersection acceleration data structures such as BVH. To further improve accuracy and alleviate aliasing, we use a spatiotemporal scheme to stochastically distribute the candidate ray samples. We demonstrate the practicality of our method by integrating it into a modern real-time renderer and showing better performance compared to existing techniques based on distance fields (DFs). Our method is free of the typical artifacts caused by incomplete scene information, and is about 2.5×–10× faster than generating and tracing DFs at the same resolution and equal storage.

12.
Context: Topic models such as probabilistic Latent Semantic Analysis (pLSA) and Latent Dirichlet Allocation (LDA) have demonstrated success in mining software repository tasks. Understanding software change messages written in unstructured natural-language text is one of the fundamental challenges in mining these messages in repositories. Objective: We seek to present a novel automatic change message classification method characterized by semi-supervised topic semantic analysis. Method: In this work, we present a semi-supervised LDA-based approach to automatically classify change messages. We use domain knowledge of software changes to create labeled samples, which are added to build the semi-supervised LDA model. Next, we verify the cross-project applicability of our method on three open-source projects. Our method has two advantages over existing software change classification methods: first, it mitigates the issue of setting an appropriate number of latent topics, since we do not have to choose this number; it corresponds to the number of class labels. Second, the approach utilizes the information provided by the labeled samples in the training set. Results: Our method automatically classified about 85% of the change messages in our experiment, and our validation survey showed that 70.56% of the time our automatic classification results agreed with developer opinions. Conclusion: Our approach automatically classifies most of the change messages that record the cause of a software change, and the method is applicable to cross-project analysis of software change messages.

13.
We present an efficient technique for out-of-core multi-resolution construction and high-quality interactive visualization of massive point clouds. Our approach introduces a novel hierarchical level-of-detail (LOD) organization based on multi-way kd-trees, which simplifies memory management and allows control over the LOD-tree height. The LOD tree, constructed bottom-up using a fast, high-quality point simplification method, is fully balanced, and all of its nodes are uniformly sized. To this end, we introduce and analyze three efficient point simplification approaches that yield a desired number of high-quality output points. For constant rendering performance, we propose an efficient rendering-on-a-budget method with asynchronous data loading, which delivers fully continuous, high-quality rendering through LOD geo-morphing and deferred blending. Our algorithm is incorporated into a full end-to-end rendering system that supports both local rendering and cluster-parallel distributed rendering. The method is evaluated on complex models made of hundreds of millions of point samples.
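A tiny sketch of the contract the LOD tree relies on, a simplifier that returns exactly the requested number of representative points. K-means centroids stand in here for the paper's three simplification schemes, so this is an assumption, not the authors' method:

```python
import numpy as np
from sklearn.cluster import MiniBatchKMeans

def simplify_points(points, n_out, seed=0):
    """Reduce `points` (N x 3) to exactly `n_out` representatives. The fixed
    output budget is what keeps every node of the LOD tree uniformly sized."""
    km = MiniBatchKMeans(n_clusters=n_out, random_state=seed, n_init=3).fit(points)
    return km.cluster_centers_

node_points = np.random.rand(100_000, 3)       # placeholder point-cloud chunk
lod_node = simplify_points(node_points, 4096)  # uniform node size for the tree
```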

14.
Data may be afflicted with uncertainty; uncertain data may be represented by an interval value or, more generally, by a fuzzy set. A number of classification methods consider uncertainty in the features of samples. Some of these methods are extended versions of support vector machines (SVMs), such as the Interval-SVM (ISVM), Holder-ISVM and Distance-ISVM, which obtain classifiers for separating samples whose features are interval values. In this paper, we extend the SVM to robust classification of linearly/non-linearly separable data whose features are fuzzy numbers. The support of such a training sample is a hypercube. Our proposed method seeks a hyperplane (in the input space or in a high-dimensional feature space) such that the point of each training sample's hypercube nearest to the hyperplane is separated with the widest symmetric margin. This strategy reduces the misclassification probability of our proposed method. Our experimental results on six real data sets show that the classification rate of our method is better than or equal to that of the well-known SVM, ISVM, Holder-ISVM and Distance-ISVM on all of these data sets.
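A sketch of the linear, box-uncertainty case using cvxpy, under the assumption that each sample's hypercube is the axis-aligned box [x_i - delta_i, x_i + delta_i]: the nearest corner of the box to the hyperplane reduces the margin by delta_i @ |w|, which stays convex. This illustrates the constraint shape, not the paper's full fuzzy formulation:

```python
import cvxpy as cp
import numpy as np

def robust_linear_svm(X, y, delta, C=1.0):
    """Soft-margin SVM over boxes instead of points. X: (n, d) box centres,
    y: (n,) labels in {-1, +1}, delta: (n, d) per-feature half-widths."""
    n, d = X.shape
    w, b = cp.Variable(d), cp.Variable()
    xi = cp.Variable(n, nonneg=True)
    # Worst case over each box: y_i (w.x_i + b) minus delta_i @ |w|.
    margins = cp.multiply(y, X @ w + b) - delta @ cp.abs(w)
    prob = cp.Problem(cp.Minimize(0.5 * cp.sum_squares(w) + C * cp.sum(xi)),
                      [margins >= 1 - xi])
    prob.solve()
    return w.value, b.value
```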

15.
Objective: Object tracking is often addressed with online learning and detection. Because classifier training during online learning takes a long time to reach high recognition accuracy, we propose cascading weak classifiers with Adaboost and, after training for a fixed number of frames, switching to detection only, as a compromise between real-time performance and accuracy. Method: We first simplify Haar features for the tracking problem to reduce the cost of feature computation. Since the classic Adaboost algorithm may not suit the imbalance between positive and negative samples that arises during tracking, we introduce a new adjustment-factor term into the sample weight update formula and combine it with cost-sensitive learning to improve the target recognition rate. The resulting tracking algorithm uses the simplified Haar features as descriptors and the improved cost-sensitive Adaboost as the classifier. Results: In tracking experiments on 20 videos, the average representative accuracy of our algorithm is about 26% higher than that of the compressive tracking algorithm and about 11% higher than that of the original cost-sensitive algorithm; its average video-processing frame rate is about 38% higher than that of compressive tracking. Conclusion: The proposed cost-sensitive Adaboost algorithm recognizes and tracks targets with high accuracy and fast processing, and shows some robustness to interference. It tracks non-rigid targets such as people particularly well.
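A sketch of a cost-sensitive Adaboost round; the paper's specific adjustment-factor term is not reproduced here, so the multiplicative per-sample cost below is the common cost-sensitive variant, used as an illustrative assumption:

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def cost_sensitive_adaboost(X, y, costs, n_rounds=50):
    """Adaboost with a per-sample cost folded into the weight update so that
    the rarer class stays influential under imbalance. y in {-1, +1};
    costs > 0, larger means a dearer mistake."""
    n = len(y)
    w = np.full(n, 1.0 / n)
    learners, alphas = [], []
    for _ in range(n_rounds):
        stump = DecisionTreeClassifier(max_depth=1).fit(X, y, sample_weight=w)
        pred = stump.predict(X)
        err = np.clip(w[pred != y].sum(), 1e-10, 1 - 1e-10)
        alpha = 0.5 * np.log((1 - err) / err)
        w *= np.exp(-alpha * y * pred) * costs   # cost-weighted update
        w /= w.sum()
        learners.append(stump)
        alphas.append(alpha)
    def predict(Xq):
        votes = sum(a * m.predict(Xq) for a, m in zip(alphas, learners))
        return np.sign(votes)
    return predict
```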

16.
Pedestrian detection is a fundamental problem in video surveillance and has achieved great progress in recent years. However, training a generic detector that performs well across a great variety of scenes has proved very difficult. On the other hand, exhaustive manual labeling for each specific scene to achieve high detection accuracy is not acceptable, especially for video surveillance applications. To alleviate the manual labeling effort without sacrificing detection accuracy, we propose a transfer learning framework based on sparse coding for pedestrian detection. In our method, a generic detector is used to obtain the initial target samples, and several filters then select a small subset of them (called target templates) whose labels and confidence values we are sure of. The relevancy between source samples and target templates, and between target samples and target templates, is estimated by sparse coding and later used to calculate weights for the source and target samples. By adding the sparse-coding-based weights to all these samples during the re-training process, we can not only exclude outliers in the source samples but also tackle the drift problem in the target samples, and thus obtain a good scene-specific pedestrian detector. Our experiments on two public datasets show that our trained scene-specific pedestrian detector performs well and is comparable to a detector trained on a large number of samples manually labeled from the target scene.
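A minimal sketch of the relevancy-weighting idea, assuming samples and templates are feature vectors and that reconstruction quality over the template dictionary can serve as the weight; the paper's exact weighting formula is not reproduced:

```python
import numpy as np
from sklearn.linear_model import Lasso

def sparse_coding_weights(samples, templates, alpha=0.05):
    """Code each sample over the target-template dictionary with a lasso and
    turn the reconstruction residual into a relevancy weight: samples that
    the templates explain well get weights near 1, outliers near 0."""
    D = templates.T                      # dictionary: features x templates
    weights = []
    for x in samples:
        coder = Lasso(alpha=alpha, positive=True, max_iter=5000).fit(D, x)
        recon = D @ coder.coef_
        err = np.linalg.norm(x - recon) / (np.linalg.norm(x) + 1e-12)
        weights.append(np.exp(-err))     # low residual => high relevancy
    return np.array(weights)
```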

17.
The graph determines the performance of graph-based semi-supervised classification. In this paper, we investigate how to construct a graph from multiple clusterings and propose a method called Semi-Supervised Classification using Multiple Clusterings (SSCMC for short). SSCMC first projects the original samples into different random subspaces and performs clustering on the projected samples. It then constructs a graph by setting an edge between two samples whenever they are clustered into the same cluster in a clustering. Next, it combines these graphs into a composite graph and feeds the resulting composite graph to a graph-based semi-supervised classifier based on local and global consistency. Our experimental results on two publicly available facial image datasets show that SSCMC not only achieves higher accuracy than other related methods but is also robust to its input parameters.
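A sketch of both stages under stated assumptions: random Gaussian projections and k-means stand in for whatever projection and clustering SSCMC actually uses, and the propagation step is the standard local-and-global-consistency iteration:

```python
import numpy as np
from sklearn.cluster import KMeans

def composite_graph(X, n_clusterings=10, subspace_dim=20, k=8, seed=0):
    """Cluster random-subspace projections of X and connect two samples
    whenever a clustering puts them in the same cluster; edge weights
    count such agreements across clusterings."""
    rng = np.random.default_rng(seed)
    n = len(X)
    W = np.zeros((n, n))
    for _ in range(n_clusterings):
        P = rng.standard_normal((X.shape[1], subspace_dim))  # random projection
        labels = KMeans(n_clusters=k, n_init=5).fit_predict(X @ P)
        W += (labels[:, None] == labels[None, :])
    np.fill_diagonal(W, 0)
    return W / n_clusterings

def lgc_propagate(W, Y, alpha=0.99, iters=100):
    """Local-and-global-consistency label propagation on graph W.
    Y is (n, classes), one-hot for labeled rows, zero for unlabeled."""
    d = W.sum(1)
    S = W / np.sqrt(np.outer(d, d) + 1e-12)   # symmetric normalization
    F = Y.astype(float).copy()
    for _ in range(iters):
        F = alpha * S @ F + (1 - alpha) * Y
    return F.argmax(1)
```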

18.
To enable rapid on-site detection of diarrhetic shellfish toxins, this paper designs a high-throughput field sample pretreatment device and a mobile-terminal-based rapid detection system. An enzyme-linked immunoassay produces a color reaction for the diarrhetic shellfish toxin; the mobile-terminal detection device captures images of the color reaction and analyzes the results quickly and accurately. Compared with the standard manual laboratory pretreatment method, the designed pretreatment device achieved recoveries of 89% versus 93%, showing that it meets the needs of on-site analysis. In tests on spiked samples, the mobile-terminal rapid detection system had a measurement standard deviation of 0.13 and an average recovery of 89.5%, indicating that its accuracy and repeatability satisfy practical detection requirements. Finally, on real samples, comparison with microplate-reader results showed that the designed high-throughput field pretreatment device and rapid detection system achieve rapid and accurate on-site detection of diarrhetic shellfish toxins, providing a new method and instrument for their field detection.
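A heavily hedged sketch of the image-analysis step only, assuming the captured well image is quantified by mean channel intensity inside a region of interest; the function name, channel choice, and `box` parameter are all illustrative, and mapping intensity to toxin concentration would need a per-assay calibration curve:

```python
import numpy as np
from PIL import Image

def well_color_response(image_path, box):
    """Average green-channel intensity inside a well's bounding box, used as
    a proxy for the immunoassay colour response.
    `box` = (left, top, right, bottom) in pixels."""
    roi = np.asarray(Image.open(image_path).convert('RGB').crop(box), float)
    return roi[..., 1].mean()
```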

19.
To address the long training times and poor generalization of neural network classifiers, a neural network classifier training method based on dynamic data reduction (DDR) is proposed. During training, each training sample is assigned a weight that measures its importance; the weights are dynamically updated according to the classification error rate of the training samples at each network iteration, and the training set is then reduced according to these weights. This increases the proportion of error-prone boundary samples and reduces the influence of redundant core samples. Numerical experiments show that the weight-based dynamic data reduction training method not only greatly shortens network training time but also significantly improves the network's classification generalization ability.
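A small sketch of the loop under the assumption of a simple multiplicative weight update: weights rise for misclassified (boundary) samples and decay for easy (redundant core) samples, and each epoch trains only on the top-weighted subset. The exact update rule of the paper may differ:

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

def ddr_train(X, y, epochs=30, keep=0.6, gamma=1.5, seed=0):
    """Dynamic-data-reduction-style training of a small MLP."""
    w = np.ones(len(y))
    net = MLPClassifier(hidden_layer_sizes=(32,), random_state=seed)
    classes = np.unique(y)
    for _ in range(epochs):
        idx = np.argsort(-w)[: int(keep * len(y))]   # reduced training set
        net.partial_fit(X[idx], y[idx], classes=classes)
        wrong = net.predict(X) != y
        w = np.where(wrong, w * gamma, w / gamma)    # boost hard, damp easy
        w /= w.mean()
    return net
```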

20.
We propose a parameter-free method to recover manifold connectivity in unstructured 2D point clouds with high noise in terms of the local feature size. This enables us to capture the features that emerge out of the noise. To achieve this, we extend the reconstruction algorithm HNN-Crust, which connects samples to two (noise-free) neighbours and has been proven to output a manifold under a relaxed sampling condition. Applying this condition to noisy samples by projecting their k-nearest neighbourhoods onto local circular fits leads to multiple candidate neighbour pairs and thus makes connecting them consistently an NP-hard problem. To solve this efficiently, we design an algorithm that searches the solution space iteratively on different scales of k. It achieves linear time complexity in point count plus quadratic time in the size of noise clusters. Our algorithm FitConnect extends HNN-Crust seamlessly to connect samples both with and without noise, operates as locally as the recovered features, and can output multiple open or closed piecewise curves. Incidentally, our method simplifies the output geometry by eliminating all but a representative point from each noisy cluster. Since the local neighbourhood fits overlap consistently, the resulting connectivity represents an ordering of the samples along a manifold. This permits us to simply blend the local fits for denoising with the locally estimated noise extent. Aside from applications such as reconstructing silhouettes of noisy sensed data, this lays important groundwork for improving surface reconstruction in 3D. Our open-source algorithm is available online.
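A minimal sketch of the local circular-fit step the candidate-neighbour search builds on, using the standard algebraic (Kasa) least-squares fit; the projection helper is an illustrative assumption, not the authors' code:

```python
import numpy as np

def fit_circle(pts):
    """Algebraic (Kasa) least-squares circle fit to 2D points: solve
    a*x + b*y + c = -(x^2 + y^2) for centre (-a/2, -b/2)."""
    A = np.c_[pts[:, 0], pts[:, 1], np.ones(len(pts))]
    rhs = -(pts[:, 0] ** 2 + pts[:, 1] ** 2)
    (a, b, c), *_ = np.linalg.lstsq(A, rhs, rcond=None)
    centre = np.array([-a / 2, -b / 2])
    radius = np.sqrt(centre @ centre - c)
    return centre, radius

def project_to_fit(pts):
    """Project a noisy k-neighbourhood onto its fitted circle,
    a local denoising step before picking candidate neighbour pairs."""
    centre, radius = fit_circle(pts)
    v = pts - centre
    return centre + radius * v / np.linalg.norm(v, axis=1, keepdims=True)
```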
