Similar Articles
20 similar articles found.
1.
A non-parametric, unsupervised learning technique is described. The technique makes use of a relation matrix to classify binary pattern vectors presented in random sequence. As each vector is classified, the elements of the matrix are adjusted in such a way as to reinforce the latest class assignment. A preliminary analysis shows that this process produces decision surfaces of a reasonable form. Extensive experiments with both simulated and real-world data confirm that the method performs very well in many circumstances.
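As an illustration of the idea, a minimal Python sketch (the update rule and the random template data are assumptions, not the paper's exact algorithm): each class is a row of a relation matrix, an incoming binary vector is assigned to the best-matching row, and the winning row is nudged toward the vector, reinforcing the latest class assignment.

```python
# A sketch of reinforcement-style classification with a relation matrix:
# assign each incoming binary vector to the best-matching row, then nudge
# that row toward the vector (illustrative update rule, not the paper's).
import numpy as np

rng = np.random.default_rng(0)

def cluster_binary(vectors, n_classes=2, lr=0.1):
    W = rng.random((n_classes, vectors.shape[1]))  # relation matrix, one row per class
    labels = np.empty(len(vectors), dtype=int)
    for i, x in enumerate(vectors):
        k = int(np.argmax(W @ x))                  # latest class assignment
        W[k] += lr * (x - W[k])                    # reinforce the winning row
        labels[i] = k
    return labels, W

# Two noisy binary clusters built from complementary templates.
a = (rng.random((50, 8)) < 0.1) ^ np.array([1, 1, 1, 1, 0, 0, 0, 0], bool)
b = (rng.random((50, 8)) < 0.1) ^ np.array([0, 0, 0, 0, 1, 1, 1, 1], bool)
data = rng.permutation(np.vstack([a, b]).astype(float))
labels, _ = cluster_binary(data)
print(labels)
```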

2.
For many real-world problems we must perform classification under widely varying amounts of computational resources. For example, if asked to classify an instance taken from a bursty stream, we may have anywhere from several milliseconds to several minutes to return a class prediction. For such problems an anytime algorithm may be especially useful. In this work we show how to convert the ubiquitous nearest neighbor classifier into an anytime algorithm that can produce an instant classification, or, if given the luxury of additional time, can continue computations to increase classification accuracy. We demonstrate the utility of our approach with a comprehensive set of experiments on data from diverse domains. We further show the utility of our work with two deployed applications: classifying and counting fish, and classifying insects.
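A minimal sketch of the anytime flavour of 1-NN, assuming the simplest possible setup (a random instance order; the paper orders the training set so the most useful neighbours are examined first):

```python
# Anytime 1-NN: scan the training set until the deadline, returning the
# best-so-far neighbour's label whenever interrupted.
import numpy as np, time

class AnytimeNN:
    def __init__(self, X, y, order=None):
        idx = np.arange(len(X)) if order is None else np.asarray(order)
        self.X, self.y = X[idx], y[idx]

    def classify(self, q, deadline):
        best_d, best_y = np.inf, None
        for x, label in zip(self.X, self.y):   # refine until time runs out
            d = np.linalg.norm(q - x)
            if d < best_d:
                best_d, best_y = d, label
            if time.monotonic() >= deadline:   # interrupted: answer now
                break
        return best_y

rng = np.random.default_rng(1)
X = rng.normal(size=(10000, 16)); y = (X[:, 0] > 0).astype(int)
clf = AnytimeNN(X, y)
print(clf.classify(rng.normal(size=16), deadline=time.monotonic() + 0.001))
```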

3.
A common approach in structural pattern classification is to define a dissimilarity measure on patterns and apply a distance-based nearest-neighbor classifier. In this paper, we introduce an alternative method for classification using kernel functions based on edit distance. The proposed approach is applicable to both string and graph representations of patterns. By means of the kernel functions introduced in this paper, string and graph classification can be performed in an implicit vector space using powerful statistical algorithms. The validity of the kernel method cannot be established for edit distance in general. However, by evaluating theoretical criteria we show that the kernel functions are nevertheless suitable for classification, and experiments on various string and graph datasets clearly demonstrate that nearest-neighbor classifiers can be outperformed by support vector machines using the proposed kernel functions.
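A minimal sketch of the kernel construction, assuming the common choice k(x, y) = exp(-gamma * d(x, y)) over Levenshtein distance (the paper discusses several kernel functions and notes that such edit-distance kernels are not positive semi-definite in general):

```python
# Edit-distance kernel for SVM string classification with a precomputed
# Gram matrix; exp(-gamma * d) is one standard similarity transform.
import numpy as np
from sklearn.svm import SVC

def edit_distance(a, b):
    d = np.arange(len(b) + 1, dtype=float)     # one-row Levenshtein DP
    for i, ca in enumerate(a, 1):
        prev, d[0] = d[0], i
        for j, cb in enumerate(b, 1):
            prev, d[j] = d[j], min(d[j] + 1, d[j - 1] + 1, prev + (ca != cb))
    return d[-1]

def gram(A, B, gamma=0.5):
    return np.exp(-gamma * np.array([[edit_distance(a, b) for b in B] for a in A]))

train = ["abba", "abab", "baba", "xyyx", "xyxy", "yxyx"]
labels = [0, 0, 0, 1, 1, 1]
svm = SVC(kernel="precomputed").fit(gram(train, train), labels)
print(svm.predict(gram(["abaa", "xyyy"], train)))   # expect [0 1]
```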

4.
This paper proposes an efficient solution to the problem of per-pixel classification of textured images with multichannel Gabor wavelet filters, based on a selection scheme that automatically determines a subset of prototypes characterizing each texture class. Results on Brodatz compositions and outdoor images, together with comparisons against alternative classification techniques, are presented.
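A minimal sketch of per-pixel Gabor texture features with a nearest-prototype rule; the paper's actual contribution, the automatic prototype selection scheme, is replaced here by a plain per-class mean prototype (an assumption):

```python
# Per-pixel Gabor filter-bank features, classified by nearest prototype.
import numpy as np
from scipy.ndimage import convolve

def gabor_kernel(freq, theta, sigma=3.0, size=15):
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    return np.exp(-(x**2 + y**2) / (2 * sigma**2)) * np.cos(2 * np.pi * freq * xr)

def features(img, freqs=(0.1, 0.2), thetas=(0, np.pi / 4, np.pi / 2)):
    responses = [np.abs(convolve(img, gabor_kernel(f, t)))
                 for f in freqs for t in thetas]
    return np.stack(responses, axis=-1)           # H x W x n_filters

rng = np.random.default_rng(2)
img = rng.normal(size=(64, 64)); img[:, 32:] += np.sin(np.arange(32) * 0.8)
feat = features(img)
protos = np.stack([feat[:, :32].reshape(-1, feat.shape[-1]).mean(0),
                   feat[:, 32:].reshape(-1, feat.shape[-1]).mean(0)])
labels = np.argmin(((feat[..., None, :] - protos) ** 2).sum(-1), axis=-1)
print(labels.mean(axis=0))   # fraction labelled class 1, per image column
```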

5.
Before implementing a pattern recognition algorithm, a rational step is to estimate its validity by bounding the probability of error. The ability to make such an estimate bears crucially on the adequacy of the particular features used, on the number of samples required to train and test the system, and on the overall paradigm. This study develops statistical upper and lower bounds for estimating the probability of error in the one-dimensional case. The bounds are distribution-free, except for requiring the existence of the relevant statistics, and can be evaluated easily by hand or by computer. Many of the results are also applicable to other problems involving the estimation of an arbitrary distribution of a random variable. Some multidimensional generalizations may be feasible.
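A worked, distribution-free example in the same spirit (not the paper's exact bounds): by Chebyshev's inequality, an error rate p estimated from n i.i.d. test samples satisfies P(|p_hat - p| >= eps) <= 1/(4*n*eps^2), since Var(p_hat) = p(1 - p)/n <= 1/(4n).

```python
# Minimal sketch: smallest eps for which the Chebyshev bound drops to delta.
def chebyshev_interval(p_hat, n, delta=0.05):
    eps = (1.0 / (4 * n * delta)) ** 0.5
    return max(0.0, p_hat - eps), min(1.0, p_hat + eps)

# 120 errors on 1000 test samples -> a 95% distribution-free interval.
print(chebyshev_interval(0.12, 1000))   # about (0.049, 0.191)
```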

6.
Multispectral imagery exhibits spectral heterogeneity, interference from fine details, and complex topological structure of ground objects, all of which hinder remote sensing classification. To address these problems, a new multi-level-set classification method based on non-parametric density estimation is proposed: the Parzen-window non-parametric density estimator is integrated into a multiphase level set framework, improving the accuracy of sample probability density estimation in complex scenes and strengthening robustness to interference. In addition, a new energy term is constructed from texture features derived from Gabor wavelet filters to enhance the model's texture analysis capability. Comparative experiments verify that the proposed model can effectively improve the quality of remote sensing image classification even when only a small amount of prior knowledge is available.
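A minimal sketch of the Parzen-window density estimate that sits inside the proposed framework (the multiphase level-set evolution itself is omitted):

```python
# Gaussian Parzen-window density estimate p(x) = (1/n) * sum_i K_h(x - x_i).
import numpy as np

def parzen_density(x, samples, h=0.5):
    samples = np.atleast_2d(samples)
    d = samples.shape[1]
    diff = (x - samples) / h
    k = np.exp(-0.5 * (diff ** 2).sum(axis=1)) / ((2 * np.pi) ** (d / 2) * h ** d)
    return k.mean()

rng = np.random.default_rng(3)
pixels = rng.normal(loc=[0.3, 0.6, 0.2], scale=0.05, size=(200, 3))  # class samples
print(parzen_density(np.array([0.3, 0.6, 0.2]), pixels, h=0.1))
```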

7.
Theory, techniques, and applications of parameterized image binarization
The proposed theory, techniques, and methods of parameterized image binarization have very broad applicability, and can further be used to mine and to hide low-level image content on the basis of the contrast-resolution limits of human vision. These binarization techniques are, in turn, supported by the gray-level/chromaticity information supplied by the previously proposed theory and techniques of graded gray-level/chromaticity spectrum flattening.
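A minimal sketch of one illustrative reading of parameterized binarization (not necessarily the authors' exact formulation): a parameter t sweeps the gray-level spectrum, yielding a family of binary images rather than a single fixed threshold.

```python
# Parameterized binarization: t in [0, 1] selects a slice of the
# gray-level range (illustrative reading of the abstract).
import numpy as np

def binarize(img, t):
    lo, hi = img.min(), img.max()
    return (img >= lo + t * (hi - lo)).astype(np.uint8)

rng = np.random.default_rng(4)
img = rng.integers(0, 256, size=(8, 8))
for t in (0.25, 0.5, 0.75):               # family of binarizations
    print(t, binarize(img, t).sum(), "foreground pixels")
```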

8.
Random sets form a well-established, general tool for modelling epistemic uncertainty in engineering. They can be seen as encompassing probability theory, fuzzy sets and interval analysis. Random set models for data uncertainty are typically used to obtain robust upper and lower bounds for the reliability of structures in engineering models. The goal of this paper is to show how random set models can be constructed from measurement data by non-parametric methods using inequalities of Tchebycheff type. Relations with sensitivity analysis will also be highlighted. We demonstrate the application of the methods in an FE-model for the excavation of a cantilever sheet pile wall.
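A minimal sketch of constructing a nested random-set model from measurement data with a Tchebycheff-type inequality: the interval mean +/- k*std carries probability mass at least 1 - 1/k^2, whatever the underlying distribution (the data here are synthetic stand-ins for geotechnical measurements):

```python
# Nested intervals with distribution-free mass bounds from Chebyshev.
import numpy as np

def chebyshev_random_set(samples, ks=(1.5, 2.0, 3.0)):
    m, s = samples.mean(), samples.std(ddof=1)
    return [(1 - 1 / k**2, (m - k * s, m + k * s)) for k in ks]

rng = np.random.default_rng(5)
measurements = rng.lognormal(mean=0.0, sigma=0.4, size=40)
for mass, (lo, hi) in chebyshev_random_set(measurements):
    print(f"P >= {mass:.2f} on [{lo:.2f}, {hi:.2f}]")
```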

9.
Engineering Asset Management (EAM) emphasizes achieving sustainable business outcomes and competitive advantage by applying systematic, risk-based processes to decisions concerning an organization's physical assets. At present there is no specific method for evaluating EAM performance and no benchmark against which to rank it. To fill this gap, an improved density- and distance-based clustering approach is proposed. The proposed approach is intelligent and efficient: it greatly simplifies the current evaluation method, so that the resources committed to manual data analysis and performance ranking can be significantly reduced. Moreover, it provides a benchmarking basis for measuring and ranking EAM performance. By using this intelligent approach, companies can also avoid paying expensive fees to external consultancies for EAM auditing and performance benchmarking.
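A minimal sketch of density- and distance-based clustering in the style of density-peak methods (an assumption about the paper's specific approach): points with high local density rho and large distance delta to any denser point become cluster centres, and the rest follow their nearest denser neighbour.

```python
# Density-peak-style clustering: centres maximize rho * delta.
import numpy as np

def density_peak_cluster(X, dc=1.0, n_centres=2):
    D = np.linalg.norm(X[:, None] - X[None, :], axis=-1)
    rho = (D < dc).sum(axis=1).astype(float)         # local density
    rho += 1e-9 * np.arange(len(X))                  # deterministic tie-break
    delta = np.empty(len(X)); nearer = np.zeros(len(X), dtype=int)
    for i in range(len(X)):
        denser = np.where(rho > rho[i])[0]
        if len(denser) == 0:
            delta[i], nearer[i] = D[i].max(), i      # unique global peak
        else:
            j = denser[np.argmin(D[i, denser])]
            delta[i], nearer[i] = D[i, j], j
    centres = np.argsort(-rho * delta)[:n_centres]
    labels = np.full(len(X), -1); labels[centres] = np.arange(n_centres)
    for i in np.argsort(-rho):                       # assign in density order
        if labels[i] < 0:
            labels[i] = labels[nearer[i]]
    return labels

rng = np.random.default_rng(6)
X = np.vstack([rng.normal(0, 0.3, (30, 2)), rng.normal(3, 0.3, (30, 2))])
print(density_peak_cluster(X))
```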

10.
Shared memory multiprocessors offer a relatively simple programming model and are suitable for a wide variety of parallel applications. Unfortunately, shared memory multiprocessors are not as scalable as distributed memory multiprocessors owing to memory and switch contentions that can result in the formation of hot spots. Spinning on synchronization variables appears to be the main culprit behind the formation of hot spots, affecting system scalability adversely. The purpose of this paper is to address the issue of performing barrier synchronization efficiently in large-scale shared memory multiprocessors. We propose a very simple design for a hardware barrier synchronizer that has the characteristics of what one would call an ideal barrier synchronizer. In particular, the proposed barrier synchronizer allows fast barrier synchronization without injecting spin traffic to create hot spots and can be reused as soon as it has completed a barrier synchronization. We also show that by augmenting this barrier synchronizer with a few gates, it can be used to perform dynamic barrier synchronization, where neither the number, nor the exact identity of processors participating in the barrier is known a priori. We will also show that a low-latency barrier synchronizer can be used not only for high-speed barrier synchronization but also, very profitably, for implementing software combining (allowing distributed hot spot accessing), for data and producer-consumer type synchronization and for the implementation of a variety of other useful applications. A high-speed barrier synchronizer can also be used to implement highly concurrent data structures and will also allow a MIMD (Multiple Instruction streams, Multiple Data streams) system to be effectively operated in a SIMD (Single Instruction stream, Multiple Data streams)-style mode, giving rise to a number of potential advantages. We use simulations to confirm that our proposed synchronizers and their applications outperform the existing barrier synchronization schemes.
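The synchronizer itself is hardware, but its key property (immediate reusability, without spin traffic on shared hot spots) is the same one a software sense-reversing barrier approximates; a minimal Python sketch of that software analogue:

```python
# Sense-reversing barrier: reusable as soon as the last thread arrives;
# waiters spin on a single flag rather than a shared counter.
import threading

class SenseBarrier:
    def __init__(self, n):
        self.n, self.count = n, n
        self.sense = False
        self.lock = threading.Lock()
        self.local = threading.local()

    def wait(self):
        my_sense = not getattr(self.local, "sense", False)
        self.local.sense = my_sense
        with self.lock:
            self.count -= 1
            if self.count == 0:            # last arriver flips the sense:
                self.count = self.n        # barrier is instantly reusable
                self.sense = my_sense
                return
        while self.sense != my_sense:      # others spin on one flag only
            pass

def worker(b, i):
    for phase in range(3):
        b.wait()                           # all threads leave each phase together

b = SenseBarrier(4)
threads = [threading.Thread(target=worker, args=(b, i)) for i in range(4)]
[t.start() for t in threads]; [t.join() for t in threads]
print("all phases synchronized")
```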

11.
In the past several years, various ontologies and terminologies such as the Gene Ontology have been developed to enable interoperability across multiple diverse medical information systems. They provide a standard way of representing terms and concepts, thereby supporting easy transmission and interpretation of data for various applications. However, with their growing utilization, not only has the number of available ontologies increased considerably, but they are also becoming larger and more complex to manage. In response, a growing body of work is emerging in the area of modular ontologies, where the emphasis is on either extracting and managing "modules" of an ontology relevant to a particular application scenario (ontology decomposition) or developing them independently and integrating them into a larger ontology (ontology composition). In this paper, we investigate state-of-the-art approaches in modular ontologies, focusing on techniques based on rigorous logical formalisms as well as on well-studied graph theories. We analyze and compare how such approaches can be leveraged in developing tools and applications in the biomedical domain. We conclude by highlighting some limitations of the modular ontology formalisms and put forward additional requirements to steer their future development.

12.
There has been relatively little work on privacy-preserving techniques for distance-based mining. The most widely used ones are additive perturbation methods and orthogonal-transform-based methods. These methods concentrate on privacy protection in the average case and provide no worst-case privacy guarantee. The lack of such a guarantee makes these techniques difficult to use in practice and leaves them open to privacy breaches under certain attacks. This paper proposes a novel privacy protection method for distance-based mining algorithms that gives worst-case privacy guarantees and protects the data against correlation-based and transform-based attacks. This method has three novel aspects. First, it uses a framework that provides a theoretical bound on privacy breach in the worst case. This framework provides easy-to-check conditions under which one can determine whether a method offers a worst-case guarantee. A quick examination shows that special types of noise such as Laplace noise provide a worst-case guarantee, while most existing methods, such as adding normal or uniform noise, as well as the random projection method, do not. Second, the proposed method combines the favorable features of additive perturbation and orthogonal transform methods. It uses principal component analysis to decorrelate the data and thus guards against attacks based on data correlations. It then adds Laplace noise to guard against attacks that can recover the PCA transform. Third, the proposed method improves the accuracy of one of the most popular distance-based classification algorithms, K-nearest neighbor classification, by taking into account the degree of distance distortion introduced by sanitization. Extensive experiments demonstrate the effectiveness of the proposed method.
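A minimal sketch of the two-stage sanitization described above: PCA to decorrelate the records, then i.i.d. Laplace noise added in the rotated space (the noise scale here is an illustrative parameter, not the paper's calibration):

```python
# PCA decorrelation followed by Laplace perturbation, then rotate back.
import numpy as np

def sanitize(X, noise_scale=0.5, rng=np.random.default_rng(7)):
    mu = X.mean(axis=0)
    _, _, Vt = np.linalg.svd(X - mu, full_matrices=False)  # PCA directions
    Z = (X - mu) @ Vt.T                                    # decorrelated scores
    Z += rng.laplace(scale=noise_scale, size=Z.shape)      # worst-case-style noise
    return Z @ Vt + mu                                     # back to data space

rng = np.random.default_rng(8)
X = rng.multivariate_normal([0, 0], [[2.0, 1.5], [1.5, 2.0]], size=500)
Xs = sanitize(X)
print("mean distance distortion:",
      np.abs(np.linalg.norm(X[0] - X[1:], axis=1)
             - np.linalg.norm(Xs[0] - Xs[1:], axis=1)).mean())
```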

13.
Three-dimensional (3D) reconstruction techniques are used more often than ever in civil engineering to obtain 3D representations of objects in the form of point cloud models, mesh models, and geometric models, among which point cloud models are the basis. To clarify the state of research on and application of these techniques in civil engineering, a literature search was carried out in the world's major literature databases, and the results are summarized from the abstracts or, where necessary, the full papers. First, the research methodology is introduced and a framework of 3D reconstruction techniques is established. Second, 3D reconstruction techniques for generating point clouds and for processing point clouds are reviewed, along with the corresponding algorithms and methods. Third, their applications in reconstructing and managing construction sites and in reconstructing pipelines of Mechanical, Electrical and Plumbing (MEP) systems are presented as typical examples, and the achievements are highlighted. Finally, the challenges are discussed and key research directions to be addressed in the future are proposed. This paper contributes to the body of knowledge on 3D reconstruction in two respects: it systematically summarizes the up-to-date achievements of and challenges for the application of 3D reconstruction techniques in civil engineering, and it proposes key future research directions for the field.

14.
The hub location problem (HLP) is a relatively new extension of classical facility location problems. Hubs are facilities that serve as consolidation, connecting, and switching points for flows between stipulated origins and destinations. While a few review papers on hub location problems exist, the most recent one (Alumur and Kara, 2008. Network hub location problems: The state of the art. European Journal of Operational Research, 190, 1–21) covers only studies of network-type hub location models published before early 2007. This paper therefore reviews the most recent advances in HLP from 2007 onward. A review of all variants of HLPs (i.e., network, continuous, and discrete HLPs) is provided; in particular, mathematical models, solution methods, main specifications, and applications of HLPs are discussed. Furthermore, some case studies illustrating real-world applications of HLPs are briefly introduced. Finally, future research directions and trends are presented.

15.
16.
For coals from seven different origins, given their quality parameters (X, Y, G) and blending ratios, and with the structure of the data unknown, this work combines PCA and PLS (based on linear transformations) with the Lmap method (based on a nonlinear transformation). On top of each algorithm's space transformation, the classification quality of every two-dimensional projection is scored (by computing the between-class and within-class distances of the points in each 2D projection plot), and the projection that separates the classes best is selected. In this way, the influence of the above coal quality parameters and blending ratios on coke mechanical strength and coke-pushing current is studied qualitatively. Building on the good class separation, support vector regression is then used to model the quantitative relation between this parameter set and the corresponding targets; the resulting prediction models yield accurate regression results, so that blending ratios can be adjusted purposefully to improve coking performance.
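A minimal sketch of the projection-selection idea: score every 2-D projection of the PCA scores by a between-class versus within-class distance ratio, keep the best one, and fit support vector regression on the selected features (the data below are synthetic stand-ins for the coal parameters and blend ratios):

```python
# Pick the best-separating 2-D projection of PCA scores, then fit SVR.
import numpy as np
from itertools import combinations
from sklearn.decomposition import PCA
from sklearn.svm import SVR

def best_projection(scores, labels):
    def sep(i, j):                                     # assumes two classes
        P = scores[:, [i, j]]
        cs = np.unique(labels)
        mus = np.array([P[labels == c].mean(0) for c in cs])
        between = np.linalg.norm(mus[0] - mus[1])
        within = np.mean([np.linalg.norm(P[labels == c] - mus[k], axis=1).mean()
                          for k, c in enumerate(cs)])
        return between / within                        # larger = better separated
    return max(combinations(range(scores.shape[1]), 2), key=lambda p: sep(*p))

rng = np.random.default_rng(9)
feats = rng.normal(size=(70, 5)); target = feats[:, 0] * 2 + rng.normal(0, 0.1, 70)
classes = (target > target.mean()).astype(int)
scores = PCA(n_components=4).fit_transform(feats)
i, j = best_projection(scores, classes)
model = SVR().fit(scores[:, [i, j]], target)
print("best projection:", (i, j), "R^2:", model.score(scores[:, [i, j]], target))
```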

17.
In this paper, fuzzy binary relations on the space of fuzzy n-cell numbers and their applications are investigated. First, we define some fuzzy approximation relations on fuzzy n-cell number space and study their properties. Second, as an application, we develop an algorithmic version of classification in an imprecise or uncertain environment using the fuzzy approximation relations. Practical examples are provided to show the application and rationality of the proposed techniques.
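A minimal sketch of classification by a fuzzy approximation relation, assuming fuzzy numbers represented as sampled membership vectors and the standard sup-min overlap as similarity (a simplification of the n-cell-number relations in the paper):

```python
# Classify a fuzzy sample by its sup-min overlap with class prototypes.
import numpy as np

def fuzzy_similarity(mu_a, mu_b):
    """Sup-min overlap of two membership functions on a shared grid."""
    return np.minimum(mu_a, mu_b).max()

def classify(sample, prototypes):
    sims = {c: fuzzy_similarity(sample, mu) for c, mu in prototypes.items()}
    return max(sims, key=sims.get), sims

grid = np.linspace(0, 10, 101)
tri = lambda c, w: np.clip(1 - np.abs(grid - c) / w, 0, 1)   # triangular number
prototypes = {"low": tri(2, 1.5), "high": tri(7, 1.5)}
label, sims = classify(tri(6.2, 1.0), prototypes)
print(label, {k: round(v, 3) for k, v in sims.items()})
```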

18.
19.
We apply new contour features: (1) point features, obtained by computing the convexity and curvature in small contour neighborhoods; (2) segment features, obtained by segmenting the contour into convex, concave and straight segments and computing length and curvature measures for each segment; (3) global features, obtained by computing the mean, maximum and minimum of all point and segment features. Features can be extracted from noisy contours with convex, concave and straight parts, but also from completely convex ones, for shape analysis or identification (ID) tasks. Using only four global features, a nearest-mean classifier yielded a perfect ID rate of 100% on diatoms with minute differences in shape, which are difficult to identify even for diatomists.
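A minimal sketch of the point features, discrete curvature and convexity along a closed contour, together with the global mean/max/min summaries fed to the nearest-mean step (the segment features are omitted):

```python
# Discrete curvature along a closed contour plus global summaries.
import numpy as np

def curvature(contour):
    """Signed curvature at each point of a closed (N, 2) contour."""
    d1 = np.gradient(contour, axis=0)
    d2 = np.gradient(d1, axis=0)
    num = d1[:, 0] * d2[:, 1] - d1[:, 1] * d2[:, 0]
    return num / (d1[:, 0] ** 2 + d1[:, 1] ** 2) ** 1.5

def global_features(contour):
    k = curvature(contour)
    convexity = (k > 0).mean()                 # fraction of convex points
    return np.array([k.mean(), k.max(), k.min(), convexity])

t = np.linspace(0, 2 * np.pi, 200, endpoint=False)
circle = np.c_[np.cos(t), np.sin(t)]
blob = np.c_[(1 + 0.3 * np.sin(3 * t)) * np.cos(t),
             (1 + 0.3 * np.sin(3 * t)) * np.sin(t)]
print(global_features(circle), global_features(blob))
```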

20.
Advances and applications on microfluidic velocimetry techniques
The development and performance analysis of microfluidic components for lab-on-a-chip devices are becoming increasingly important as microfluidic applications continue to expand in biology, nanotechnology, and manufacturing. Characterizing fluid behavior at the micro- and nanometer scales is therefore essential. A variety of microfluidic velocimetry techniques, such as micron-resolution Particle Image Velocimetry (μPIV) and particle-tracking velocimetry (PTV), have been developed to characterize such microfluidic systems with spatial resolutions on the order of micrometers or less. This article discusses the fundamentals of the established velocimetry techniques as well as the technical applications found in the literature.
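A minimal sketch of the core PIV computation: the displacement of the particle pattern between two frames is read off the peak of their FFT-based cross-correlation (a single interrogation window; real μPIV adds windowing, sub-pixel peak fitting, and ensemble averaging):

```python
# Single-window PIV displacement via FFT cross-correlation.
import numpy as np

def piv_displacement(frame_a, frame_b):
    a = frame_a - frame_a.mean(); b = frame_b - frame_b.mean()
    corr = np.fft.ifft2(np.fft.fft2(a).conj() * np.fft.fft2(b)).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    wrap = lambda d, n: d - n if d > n // 2 else d     # unwrap negative shifts
    return wrap(dx, corr.shape[1]), wrap(dy, corr.shape[0])

rng = np.random.default_rng(10)
frame_a = rng.random((64, 64))
frame_b = np.roll(frame_a, shift=(3, -2), axis=(0, 1))  # known flow: dy=3, dx=-2
print(piv_displacement(frame_a, frame_b))               # expect (-2, 3)
```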
