20 similar documents found.
1.
We consider the problem of measuring the similarity or distance between two finite sets of points in a metric space, and computing the measure. This problem has applications in, e.g., computational geometry, philosophy of science, updating or changing theories, and machine learning. We review some of the distance functions proposed in the literature, among them the minimum distance link measure, the surjection measure, and the fair surjection measure, and supply polynomial-time algorithms for the computation of these measures. Furthermore, we introduce the minimum link measure, a new distance function which is more appealing than the other distance functions mentioned. We also present a polynomial-time algorithm for computing this new measure. We further address the issue of defining a metric on point sets. We present the metric infimum method, which constructs a metric from any distance function on point sets. In particular, the metric infimum of the minimum link measure is quite intuitive. The computation of this measure is shown to be in NP for a broad class of instances; it is NP-hard for a natural problem class.
Received: 1 July 1994 / 9 November 1995
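For concreteness, here is a minimal Python sketch of one simple baseline from this family of set distances: the normalized, symmetrized sum of nearest-neighbor distances. This is not the paper's minimum link measure or its metric infimum; the function name and the normalization are my own.

    from math import dist

    def sum_of_min_distances(A, B):
        # Each point's distance to its nearest neighbor in the other set,
        # summed in both directions and normalized by the total point count.
        d_ab = sum(min(dist(a, b) for b in B) for a in A)
        d_ba = sum(min(dist(a, b) for a in A) for b in B)
        return (d_ab + d_ba) / (len(A) + len(B))

    print(sum_of_min_distances([(0, 0), (1, 0)], [(0, 1), (2, 0)]))  # 1.0

Like most of the measures the paper reviews, this function is symmetric and zero on identical sets but need not satisfy the triangle inequality, which is what motivates the metric infimum construction.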
2.
Distance field manipulation of surface models
A surface manipulation technique that uses distance fields (scalar fields derived geometrically from surface models) to combine, modify, and analyze surfaces is presented. It is intended for application to complex models arising in scientific visualization. Computing distance from single triangles is discussed, and an optimized algorithm for computing the distance field from an entire closed surface is built. The use of the fields for surface removal, interpolation, and blending is examined. The strength of the approach is that it lets simple 3D algorithms substitute for potentially very complex 2D methods.
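A brute-force 2D analogue of the distance-field construction, sampling the distance from a grid of points to a set of line segments; the paper's optimized closed-surface algorithm and triangle distance tests are not reproduced, and all names here are illustrative.

    import numpy as np

    def point_segment_distance(p, a, b):
        # Distance from 2D point p to segment ab (all numpy arrays).
        ab, ap = b - a, p - a
        t = np.clip(ap @ ab / (ab @ ab), 0.0, 1.0)
        return np.linalg.norm(p - (a + t * ab))

    def distance_field(segments, xs, ys):
        # Unsigned distance field on a grid, brute force over all segments.
        field = np.empty((len(ys), len(xs)))
        for j, y in enumerate(ys):
            for i, x in enumerate(xs):
                p = np.array([x, y], dtype=float)
                field[j, i] = min(point_segment_distance(p, a, b)
                                  for a, b in segments)
        return field

Once such a scalar field exists, the surface operations the abstract mentions (blending, removal) reduce to simple per-sample arithmetic on fields, which is the point of the approach.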
3.
Uncertainty measurement of rough sets is one of the important research topics in rough set theory. This paper improves uncertainty measurement methods for rough sets by combining fuzzy theory with granular computing theory. Roughness and fuzziness measurement functions for rough sets are defined via the relative knowledge granularity of a set and the boundary entropy; as the knowledge granules of the approximation space are refined, both the roughness and the fuzziness of a rough set decrease monotonically. Matrix-based algorithms for the roughness and fuzziness measures, which are easy to implement, are proposed using matrix theory.
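The improved measures themselves cannot be reconstructed from the abstract, but the classical Pawlak roughness they refine is easy to sketch, together with the monotone behavior under granule refinement that the abstract claims. All names here are mine.

    def approximations(partition, X):
        # Lower/upper approximations of X under a partition (list of sets).
        X = set(X)
        lower = {e for block in partition if set(block) <= X for e in block}
        upper = {e for block in partition if set(block) & X for e in block}
        return lower, upper

    def roughness(partition, X):
        # Classical Pawlak roughness: 1 - |lower| / |upper|.
        lower, upper = approximations(partition, X)
        return 1.0 - len(lower) / len(upper) if upper else 0.0

    coarse = [{1, 2}, {3, 4}, {5, 6}]
    fine = [{1}, {2}, {3, 4}, {5}, {6}]      # a refinement of 'coarse'
    X = {1, 3, 4, 5}
    print(roughness(coarse, X), roughness(fine, X))  # 0.667 then 0.0

The printed values illustrate the monotonicity property: refining the knowledge granules can only decrease the roughness.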
4.
M. Viceconti, C. Zannoni, D. Testi, A. Cappello. Computer Methods and Programs in Biomedicine, 1999, 59(3):159-166
In modelling applications such as custom-made implant design, it is useful to have a surface representation of the anatomy of bones rather than the voxel-based representation generated by tomography systems. Voxel-to-surface conversion is usually done by 2D segmentation of the image stack. However, other methods allow a direct 3D segmentation of the CT or MRI data set. In the present work, two of these methods, namely the Standard Marching Cube (SMC) and the Discretized Marching Cube (DMC) algorithms, were compared in terms of local accuracy when used to reconstruct the geometry of a human femur. The SMC method was found to be more accurate than the DMC method: it was capable of reconstructing the inner and outer geometry of a human femur with a peak error lower than 0.9 mm and an average error comparable to the pixel size (0.3 mm). However, the large number of triangles generated by the algorithm may limit its adoption in many modelling applications. The peak error of the DMC algorithm was 1.6 mm, but it produced approximately 70% fewer triangles than the SMC method. From the results of this study, it may be concluded that three-dimensional segmentation algorithms are useful not only in visualisation applications but also in the creation of geometric models.
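A sketch of the kind of local-accuracy comparison described, assuming the reconstructed mesh and a reference sampling of the true surface are available as point arrays; nearest-vertex distance is a crude stand-in for true point-to-surface distance, and the names are illustrative.

    import numpy as np

    def peak_and_mean_error(mesh_vertices, reference_points):
        # For each reference sample, distance to the nearest mesh vertex;
        # report (peak, mean). Brute force, O(n*m) time and memory.
        d = np.linalg.norm(
            mesh_vertices[:, None, :] - reference_points[None, :, :], axis=2
        ).min(axis=0)
        return d.max(), d.mean()

Reporting both the peak and the mean, as the paper does, matters because the two algorithms trade worst-case accuracy (0.9 mm vs. 1.6 mm) against triangle count.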
5.
6.
7.
Distance and similarity measures for hesitant fuzzy sets
In this paper, we propose a variety of distance measures for hesitant fuzzy sets, based on which the corresponding similarity measures can be obtained. We investigate the connections of the aforementioned distance measures and further develop a number of hesitant ordered weighted distance measures and hesitant ordered weighted similarity measures. They can alleviate the influence of unduly large (or small) deviations on the aggregation results by assigning them low (or high) weights. Several numerical examples are provided to illustrate these distance and similarity measures.
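One concrete member of this family is the hesitant normalized Hamming distance. The sketch below follows the usual convention of sorting each membership set and padding the shorter one by repeating its largest value (the "optimistic" extension); that padding convention is one of several used in this literature and is an assumption here.

    def hesitant_hamming(M, N):
        # M, N: hesitant fuzzy sets as lists of membership-value lists,
        # one list per element of the universe.
        total = 0.0
        for h1, h2 in zip(M, N):
            a, b = sorted(h1), sorted(h2)
            l = max(len(a), len(b))
            a += [a[-1]] * (l - len(a))   # pad shorter set with its max
            b += [b[-1]] * (l - len(b))
            total += sum(abs(x - y) for x, y in zip(a, b)) / l
        return total / len(M)

    M = [[0.1, 0.3], [0.6]]
    N = [[0.2], [0.4, 0.7]]
    print(hesitant_hamming(M, N))  # 0.125

A similarity measure then follows as 1 minus the distance, which is the correspondence the abstract refers to.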
8.
Quan Shuangyan. Computer Engineering and Applications, 2008, 44(26):65-67
This paper introduces the axiomatic definition of distance measures on Vague sets, proposes some new distance measures, and proves their properties. The relationship between similarity measures and distance measures is discussed. Finally, applications of Vague set distance measures in pattern recognition are pointed out.
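A sketch of one common Hamming-type distance between Vague sets, with each element given as a (t, f) pair of true- and false-membership bounds (t + f <= 1); the paper's axiomatized measures may differ, and this form is offered only as a representative example.

    def vague_distance(A, B):
        # Averaged Hamming distance over paired vague values (t, f).
        return sum(
            (abs(t1 - t2) + abs(f1 - f2)) / 2
            for (t1, f1), (t2, f2) in zip(A, B)
        ) / len(A)

    A = [(0.5, 0.3), (0.7, 0.1)]
    B = [(0.6, 0.3), (0.5, 0.2)]
    print(vague_distance(A, B))  # 0.125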
9.
We use the Z-buffer technique for fast and efficient shadow generation. Volumetric data contain information about the grid points only. Such data do not provide surface information that could be projected immediately onto the shadow map. To solve this problem, we have implemented two techniques. The first uses a modified adaptive version of the well-known marching cubes algorithm, tailored to the special characteristics of medical data sets. The algorithm uses material properties for a precise representation of object boundaries, generating volumetric objects quickly and effectively. There are two representations of the same data set: we use a view-independent approximation to display shadows and the original representation of the volume for object visualization in full precision. The second algorithm uses a ray-tracing approach to create shadow maps. The same routine is used for object rendering, but is restricted to depth-value generation. Semitransparent objects are handled by storing an intensity profile in addition to the depth value.
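At the core of both variants is the same shadow-map depth comparison, sketched below; the texel addressing and bias value are placeholders, and the intensity-profile handling for semitransparent occluders is not shown.

    def lit(frag_depth, shadow_map, u, v, bias=1e-3):
        # Classic shadow-map test: a fragment is lit iff its depth from the
        # light does not exceed the stored nearest depth (bias fights acne).
        return frag_depth <= shadow_map[v][u] + bias

    shadow_map = [[0.40, 0.40], [0.90, 0.90]]   # toy 2x2 depth map
    print(lit(0.50, shadow_map, 0, 0))  # False: occluded at depth 0.40
    print(lit(0.50, shadow_map, 0, 1))  # True: nearest occluder is behind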
10.
11.
Support vector machines are among the most effective classification techniques, offering high classification accuracy and good generalization ability, but their training process is still very expensive on large data sets. This paper proposes a classification method based on one-class support vector machines. A random selection algorithm is used to reduce the training set and thereby speed up training; at the same time, the classification accuracy of the support vector machine is preserved by recovering, in the original data, the neighborhoods of the samples lying in the intersection of the hyperspheres. Experiments show that the method considerably reduces computational complexity and thus speeds up training on large data sets.
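A sketch of only the random-reduction step, fitting an ordinary SVM on the subsample (scikit-learn is an illustrative choice, not the paper's implementation); the one-class hypersphere construction and the neighborhood-recovery step that restores accuracy are not shown.

    import numpy as np
    from sklearn.svm import SVC

    def train_on_reduced(X, y, keep_ratio=0.2, seed=0):
        # Randomly keep a fraction of the training set before fitting,
        # trading some information for a much cheaper SVM training pass.
        rng = np.random.default_rng(seed)
        n_keep = max(1, int(keep_ratio * len(X)))
        idx = rng.choice(len(X), size=n_keep, replace=False)
        return SVC(kernel="rbf").fit(X[idx], y[idx])

Because SVM training is superlinear in the number of samples, even this naive reduction cuts cost sharply; the paper's contribution is doing so without the accuracy loss that naive subsampling incurs.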
12.
Many real data sets in databases may vary dynamically. With such data sets, one has to run a knowledge acquisition algorithm repeatedly in order to acquire new knowledge. This is a very time-consuming process. To overcome this deficiency, several approaches have been developed to deal with dynamic databases. They mainly address knowledge updating from three aspects: the expansion of data, the increasing number of attributes and the variation of data values. This paper focuses on attribute reduction for data sets with dynamically varying data values. Information entropy is a common measure of uncertainty and has been widely used to construct attribute reduction algorithms. Based on three representative entropies, this paper develops an attribute reduction algorithm for data sets with dynamically varying data values. When a part of data in a given data set is replaced by some new data, compared with the classic reduction algorithms based on the three entropies, the developed algorithm can find a new reduct in a much shorter time. Experiments on six data sets downloaded from UCI show that the algorithm is effective and efficient.
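One of the classic measures such reduction algorithms rank attribute subsets with is Shannon conditional entropy of the decision given a set of condition attributes, sketched below; the paper's incremental update machinery for changing data values is not reproduced, and the record layout is my assumption.

    from collections import Counter, defaultdict
    from math import log2

    def conditional_entropy(rows, B, d):
        # H(d | B) for records given as dicts; B is a tuple of condition
        # attributes, d the decision attribute.
        n = len(rows)
        blocks = defaultdict(Counter)
        for r in rows:
            blocks[tuple(r[a] for a in B)][r[d]] += 1
        h = 0.0
        for dec in blocks.values():
            cnt = sum(dec.values())
            h -= (cnt / n) * sum((c / cnt) * log2(c / cnt)
                                 for c in dec.values())
        return h

    rows = [{"a": 0, "d": "y"}, {"a": 0, "d": "n"}, {"a": 1, "d": "y"}]
    print(conditional_entropy(rows, ("a",), "d"))  # ~0.667 bits

An attribute subset B is a reduct candidate when H(d | B) matches H(d | all condition attributes); recomputing this from scratch after every data change is exactly the cost the incremental algorithm avoids.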
13.
At Los Alamos National Laboratory, geoscientists have assembled and integrated 30 geological, geophysical, and geochemical data sets with four Landsat bands for the Montrose 1° × 2° quadrangle, Colorado. Three graphical displays were developed to determine if visual analysis of the data facilitated interpretation. Two displays project the data spatially: gray-level maps project values of a single data set, and three-color overlays project the values of three data sets simultaneously. The third display, a three-dimensional plot, graphs three data sets and allows examination of relationships in parameter space. Two examples illustrate the potential applications of the display techniques. Uranium in sediments, uranium in waters, and equivalent uranium each provide unique information about uranium distribution in the quadrangle. However, the combined data convey more information than each data set separately. Copper, lead, and zinc displays allow identification of all the base-metal districts and convey information about the geochemical character of the deposits. Visual displays greatly increase efficiency of analysis and interpretability of diverse geologic data sets.
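A sketch of the three-color overlay display: normalize three co-registered data sets to [0, 1] and stack them as the red, green, and blue channels of one image; array names are illustrative.

    import numpy as np

    def three_color_overlay(r_data, g_data, b_data):
        # Min-max normalize each co-registered raster and stack as RGB.
        def norm(a):
            a = np.asarray(a, dtype=float)
            return (a - a.min()) / (a.max() - a.min() + 1e-12)
        return np.dstack([norm(r_data), norm(g_data), norm(b_data)])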
14.
We present algorithms for computing accurate moments of solid models that are represented using multiple trimmed NURBS surfaces and triangles. Our algorithms make use of programmable Graphics Processing Units (GPUs) to accelerate the moment computations. For NURBS surfaces, we evaluate the surface coordinates and normals accurately, with theoretical bounds, using our GPU NURBS evaluator. We have developed a framework that makes use of this data to evaluate surface integrals of trimmed NURBS surfaces in real time. Since typical solid models also contain flat planes that are usually tessellated into triangles, our framework also includes GPU acceleration of the moment contributions of such flat faces. Using our framework, we can compute volume and moments of solid models with theoretical guarantees. Our algorithms support local geometry changes, which is useful for providing interactive feedback to the designer while the solid model is being designed. We can compute the center of mass and check for stability of the solid model interactively. Other applications of such real-time moment computation include deformation modeling, animation, and physically based simulations.
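The GPU/NURBS machinery is beyond a sketch, but the divergence-theorem identity behind the triangle contributions is standard: summing signed tetrahedra against the origin yields the volume and first moments of a closed, consistently oriented triangle mesh. This CPU sketch shows that identity only.

    import numpy as np

    def volume_and_centroid(triangles):
        # triangles: iterable of (v0, v1, v2) vertex triples (3D points).
        vol = 0.0
        first = np.zeros(3)
        for v0, v1, v2 in triangles:
            v0, v1, v2 = (np.asarray(v, dtype=float) for v in (v0, v1, v2))
            dv = np.dot(v0, np.cross(v1, v2)) / 6.0   # signed tet volume
            vol += dv
            first += dv * (v0 + v1 + v2) / 4.0        # tet centroid * volume
        return vol, first / vol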
15.
Standard texture mapping hardware enables rapid rendering of color mapped surfaces with interpolated surface shading. New algorithms extend this to bump mapping, Phong shading, and reflection mapping. We first introduce the bidirectional reflectance function we wish to optimize, split into diffuse, specular, and environment terms. Casting the diffuse term as a table lookup, we introduce lighting tables and efficient ways to compute them for distant lights. We also revisit the geometry of bump mapping, extending Blinn's (1978) results. We consider caching intermediate results for rendering animated rigid bodies, generalizing this to animated surfaces using a technique called parametric rasterization. Finally, we describe efficient reflection mapping and discuss implications for bump-mapped surfaces. We present a fast method for rendering Phong highlights and discuss a special case of a planar surface with simulated water ripples.
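A scalar sketch of the diffuse/specular split described, in plain Phong form; the lighting tables, bump mapping, and reflection mapping are not reproduced, and the coefficients are placeholders.

    import numpy as np

    def phong(n, l, v, kd=0.7, ks=0.3, shininess=32.0):
        # n = surface normal, l = direction to light, v = direction to viewer.
        n, l, v = (np.asarray(u, dtype=float) / np.linalg.norm(u)
                   for u in (n, l, v))
        diff = max(np.dot(n, l), 0.0)                 # diffuse term
        r = 2.0 * np.dot(n, l) * n - l                # reflected light dir
        spec = max(np.dot(r, v), 0.0) ** shininess if diff > 0 else 0.0
        return kd * diff + ks * spec

    print(phong([0, 0, 1], [0, 0.6, 0.8], [0, 0, 1]))

Because the diffuse term depends only on n·l, it can be precomputed into a table indexed by the normal for distant lights, which is the lighting-table idea the abstract describes.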
16.
Carlos Ordonez, Naveen Mohanam, Carlos Garcia-Alvarado. Distributed and Parallel Databases, 2014, 32(3):377-403
Parallel processing is essential for large-scale analytics. Principal Component Analysis (PCA) is a well-known model for dimensionality reduction in statistical analysis, and it demands a large number of I/O and CPU operations. In this paper, we study how to compute PCA in parallel. We extend a previous sequential method to a highly parallel algorithm that can compute PCA in one pass on a large data set based on summarization matrices. We also study how to integrate our algorithm with a DBMS; our solution is based on a combination of parallel data set summarization via user-defined aggregations and calling the MKL parallel variant of the LAPACK library to solve Singular Value Decomposition (SVD) in RAM. Our algorithm is theoretically shown to achieve linear speedup, linear scalability on data size, and quadratic time on dimensionality (but in RAM), spending most of the time on data set summarization, despite the fact that SVD has cubic time complexity on dimensionality. Experiments with large data sets on multicore CPUs show that our solution is much faster than the R statistical package as well as solving PCA with SQL queries. Benchmarking on multicore CPUs and a parallel DBMS running on multiple nodes confirms linear speedup and linear scalability.
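A sketch of the one-pass summarization idea: accumulate the count n, the linear sum L = Σx, and the quadratic sum Q = Σxx^T over data chunks (the embarrassingly parallel part), then diagonalize the covariance in RAM. This mirrors the structure the abstract describes, but the details and names are mine, and NumPy's eigendecomposition stands in for MKL/LAPACK SVD.

    import numpy as np

    def pca_one_pass(chunks, k):
        # chunks: iterable of (rows, d) arrays; returns top-k components.
        n, L, Q = 0, None, None
        for X in chunks:
            n += X.shape[0]
            s, G = X.sum(axis=0), X.T @ X
            L = s if L is None else L + s
            Q = G if Q is None else Q + G
        mu = L / n
        cov = Q / n - np.outer(mu, mu)   # covariance from the summaries
        w, V = np.linalg.eigh(cov)       # dense decomposition in RAM
        return V[:, ::-1][:, :k]         # top-k principal directions

    X = np.random.default_rng(0).normal(size=(1000, 5))
    V = pca_one_pass([X[:500], X[500:]], k=2)   # two "parallel" chunks

The summaries n, L, and Q are tiny (O(d^2)) and additive across chunks, which is why the method parallelizes with linear speedup and fits naturally into user-defined DBMS aggregations.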
17.
18.
Ownership sets are fundamental to the partitioning of program computations across processors by the owner-computes rule. These sets arise due to the mapping of arrays onto processors. In this paper, we focus on how ownership sets can be efficiently determined in the context of the HPF language and show how the structure of these sets can be symbolically characterized in the presence of arbitrary array alignment and array distribution directives. Our starting point is a system of equalities and inequalities due to Ancourt et al. (1995) that captures the array mapping problem in HPF. We arrive at a refined system that enables us to efficiently solve for the ownership set using the Fourier-Motzkin Elimination technique and that requires the course vector as the only auxiliary vector. The formulation makes it possible to enumerate the elements of the ownership set exactly once, a feature that is very beneficial when such sets are applied to handle DO loops qualified by HPF's INDEPENDENT directive. We develop important and general properties pertaining to HPF alignments and distributions and show how they can be used to eliminate redundant communication due to array replication. Polynomial-time schemes that determine whether the ownership set of a particular processor, with respect to some array, is the empty set, or whether the ownership set of every processor, with respect to some array, is the empty set, are presented. We show how distribution directives with unspecified processor meshes can be efficiently handled at compile time. We also show how to avoid the generation of communication code when pairs of array references are ultimately mapped onto the same processors. Experimental data demonstrating the improved code performance that the latter optimization enables are presented and discussed.
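A toy special case for intuition: the ownership set of a one-dimensional HPF-style BLOCK distribution, where the general alignment/distribution system the paper solves by Fourier-Motzkin elimination collapses to a closed form. Function names and the 0-based indexing are my conventions.

    def block_owner(i, n, p):
        # Owner of array index i for n elements BLOCK-distributed over
        # p processors; block size is ceil(n/p).
        block = -(-n // p)   # ceiling division
        return i // block

    def ownership_set(proc, n, p):
        # All indices owned by 'proc': a single contiguous range.
        block = -(-n // p)
        return range(proc * block, min((proc + 1) * block, n))

    print(list(ownership_set(1, 10, 4)))  # processor 1 owns [3, 4, 5]

With arbitrary ALIGN and DISTRIBUTE directives the set is no longer a single interval, which is why the paper characterizes it as the solution set of a system of linear equalities and inequalities instead.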
19.
Finite field multiplication is a crucial building block for cryptography, especially the elliptic curve public key cryptosystem. Recently, various algorithms for efficient finite field multiplication on devices whose resources are extremely constrained have been proposed. However, most of these proposals only take speed optimization into account and do not pay much attention to optimization of memory usage. In this paper, we propose a multiplication algorithm on $F_{2^{m}}$ which minimizes the RAM requirement by rescheduling operation sequences. According to our experimental results on the ATmega128L microprocessor, the proposed algorithm reduces the amount of required RAM by up to 50% while maintaining the same speed. We also verify the feasibility of our algorithm by applying it to the elliptic curve cryptosystem.
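A plain shift-and-reduce multiplier over $F_{2^{m}}$ with polynomials encoded as Python ints (bit i represents x^i); this shows the arithmetic being optimized, not the paper's RAM-rescheduled operation sequence.

    def gf2m_mul(a, b, m, poly):
        # 'poly' is the irreducible reduction polynomial, x^m term included.
        r = 0
        for i in range(b.bit_length()):       # carry-less multiply
            if (b >> i) & 1:
                r ^= a << i
        for i in range(r.bit_length() - 1, m - 1, -1):  # reduce mod poly
            if (r >> i) & 1:
                r ^= poly << (i - m)
        return r

    # Example in F_{2^8} with the AES polynomial x^8+x^4+x^3+x+1 (0x11B):
    print(hex(gf2m_mul(0x57, 0x83, 8, 0x11B)))  # 0xc1, per FIPS-197

On an 8-bit microcontroller like the ATmega128L, each of these whole-integer operations becomes a sequence of byte operations, and the order of that sequence determines how many intermediate bytes must live in RAM, which is the quantity the paper minimizes.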
20.
Yong-Joon Kim, Young-Taek Oh, Seung-Hyun Yoon, Myung-Soo Kim, Gershon Elber. Computer-Aided Design, 2013, 45(2):270-276
We present an interactive-speed algorithm for computing the Hausdorff Distance (HD) between two freeform geometric models represented with NURBS surfaces. The algorithm is based on an effective technique for matching a surface patch from one model to the corresponding nearby surface patch on the other model. To facilitate the matching procedure, we employ a bounding volume hierarchy (BVH) for freeform NURBS surfaces, which provides a hierarchy of Coons patches and bilinear surfaces approximating the NURBS surfaces (Kim et al., 2011 [1]). Comparing the local HD upper bound against a global HD lower bound, we can eliminate the majority of redundant surface patches from further consideration. The resulting algorithm and the associated data structures are considerably simpler than the previous BVH-based HD algorithms. As a result, we can compute the HD of two freeform geometric models efficiently and robustly even when the two models are in close proximity. We demonstrate the effectiveness of our approach using several experimental results.
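A brute-force stand-in for the symmetric Hausdorff distance on point samplings of the two models; the paper's BVH pruning with Coons-patch bounds is precisely what makes the computation interactive on real surfaces, and is not shown here.

    import numpy as np

    def hausdorff(P, Q):
        # P: (n, 3) and Q: (m, 3) point samples; O(n*m) time and memory.
        D = np.linalg.norm(P[:, None, :] - Q[None, :, :], axis=2)
        return max(D.min(axis=1).max(),   # directed HD from P to Q
                   D.min(axis=0).max())   # directed HD from Q to P

The upper/lower-bound pruning in the paper avoids ever forming this full distance matrix: a patch pair is discarded as soon as its local upper bound falls below the current global lower bound.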