Similar Documents (20 results)
1.
Three-dimensional models play an important role in many applications; the problem is how to select appropriate models from a 3D database rapidly and accurately. In recent years, a variety of shape representations, statistical methods, and geometric algorithms have been proposed for matching 3D shapes or models. In this paper, we propose a 3D shape representation scheme based on a combination of principal plane analysis and dynamic programming. The proposed scheme consists of three steps. First, a 3D model is transformed into a 2D image by projecting the vertices of the model onto its principal plane. Second, the convex hull of the 2D shape is segmented into multiple disjoint triangles using dynamic programming. Finally, for each triangle, a projection score histogram and moments are extracted as the feature vectors for similarity searching. Experimental results show the robustness of the proposed scheme, which resists translation, rotation, scaling, noise, and destructive attacks. The proposed 3D model retrieval method performs well in retrieving models with similar characteristics from a database of 3D models.
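The first step above, projecting the vertices onto the principal plane, can be sketched with a standard PCA projection (a minimal illustration under assumed inputs, not the authors' implementation; the function name and the toy vertex set are invented):

```python
import numpy as np

def project_to_principal_plane(vertices):
    """Project 3D vertices onto the plane spanned by the two
    directions of largest variance (the principal plane)."""
    centered = vertices - vertices.mean(axis=0)
    # Eigen-decomposition of the 3x3 covariance matrix.
    cov = centered.T @ centered / len(vertices)
    eigvals, eigvecs = np.linalg.eigh(cov)   # eigenvalues ascending
    plane_axes = eigvecs[:, 1:]              # two largest-variance axes
    return centered @ plane_axes             # N x 2 image coordinates

# Points already lying in the z = 0 plane project without distortion.
pts = np.array([[0.0, 0.0, 0.0], [2.0, 0.0, 0.0],
                [0.0, 1.0, 0.0], [2.0, 1.0, 0.0]])
flat = project_to_principal_plane(pts)
```

Because the projection keeps only the low-variance direction out, distances within the dominant plane are preserved, which is what makes the resulting 2D image usable for matching.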

2.
何辰  王磊  王春萌 《计算机应用》2016,36(2):546-550
To address the storage and network transmission of three-dimensional (3D) mesh models, a novel 3D model compression algorithm is proposed. The algorithm is based on slicing the mesh model and consists of three steps: computing slice vertices, uniformly sampling the slice boundaries, and encoding the resulting slice images. For a given 3D model, the bounding box of the model is computed first; the model is then sliced along the longest dimension of the bounding box, and the intersections of each slice with the edges of the mesh surface are computed, forming a polygon that serves as the slice boundary. Next, each slice boundary is uniformly resampled so that every slice has the same number of vertices. Finally, the vertex coordinates of each slice are converted to polar form, so that the ρ-coordinates and θ-coordinates of all layers form two images, and the original 3D model can be represented by these two images. This representation has two clear advantages: it reduces the dimensionality of the data, effectively reducing the data volume, and it exhibits strong data correlation, further reducing the entropy of the data. Exploiting these two advantages, the algorithm applies differential coding and arithmetic coding to the image data to obtain the compressed file. Compared with the Incremental Parametric Refinement (IPR) method, the proposed algorithm improves coding efficiency by 23% at the same decoded model quality. Experimental results show that the proposed algorithm achieves good compression efficiency for model storage and transmission applications, effectively reducing the data volume.
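The polar-coordinate step can be sketched as follows (an illustrative fragment only; the slice is assumed already resampled, and taking the slice centroid as the pole is an assumption the abstract leaves open):

```python
import math

def slice_to_polar(boundary):
    """Convert a resampled slice boundary, a list of (x, y) points,
    into parallel lists of rho (radius) and theta (angle) values
    measured from the slice centroid."""
    cx = sum(x for x, _ in boundary) / len(boundary)
    cy = sum(y for _, y in boundary) / len(boundary)
    rho = [math.hypot(x - cx, y - cy) for x, y in boundary]
    theta = [math.atan2(y - cy, x - cx) for x, y in boundary]
    return rho, theta

# Stacking the rho rows (and theta rows) of all slices yields the
# two images that represent the whole model.
square = [(1, 0), (0, 1), (-1, 0), (0, -1)]
rho, theta = slice_to_polar(square)
```

Adjacent slices of a smooth model produce similar rho/theta rows, which is the correlation the differential and arithmetic coding stages exploit.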

3.
Connectivity compression techniques for very large 3D triangle meshes are based on clever traversals of the graph representing the mesh, so as to avoid repeated references to vertices. In this paper we present a new algorithm for compressing large 3D triangle meshes through the successive conquest of triangle fans. The connectivity of vertices in a fan is implied. As each fan is traversed, the current mesh boundary is advanced by the fan-front. The process is recursively continued until the entire mesh is traversed. The mesh is then compactly encoded as a sequence of fan configuration codes. The fan configuration code comprehensively encodes the connectivity of the fan with the rest of the mesh. There is no need for any further special operators like split codes and additional vertex offsets. The number of fans is typically one-fourth of the total number of triangles. Only a few of the fan configurations occur with high frequency, enabling excellent connectivity information compression using range encoding. A simple implementation shows significant improvements, on the average, in bit-rate per vertex, compared to earlier reported techniques.
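The observation that a few fan configurations dominate is exactly what a range encoder exploits; the Shannon entropy of the code stream gives the bound it approaches. A quick sketch with an invented (purely illustrative) fan-code frequency table:

```python
import math

def entropy_bits_per_symbol(counts):
    """Shannon entropy (bits/symbol) of a symbol-frequency table:
    a lower bound on the average code length an entropy coder,
    such as a range coder, can achieve."""
    total = sum(counts.values())
    return -sum((c / total) * math.log2(c / total)
                for c in counts.values() if c)

# Hypothetical fan-configuration frequencies: a skewed distribution
# (a few dominant configurations) codes far below the naive flat cost.
skewed = {"F0": 7000, "F1": 2000, "F2": 600, "F3": 400}
flat_bits = math.log2(len(skewed))        # 2 bits/symbol if coded naively
entropy_bits = entropy_bits_per_symbol(skewed)
```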

4.
Wavelet transforms have been widely used as effective tools in texture segmentation in the past decade. Segmentation of document images, which usually contain three types of texture information: text, picture and background, can be regarded as a special case of texture segmentation. B-spline wavelets possess some desirable properties such as being well localized in time and frequency, and being compactly supported, which make them an effective tool for texture analysis. Based on the observation that text textures produce rapidly changing and relatively regularly distributed edges in the wavelet transform domain, an efficient document segmentation algorithm is designed via cubic B-spline wavelets. Three-means or two-means classification is applied for classifying pixels with similar characteristics after feature estimation at the outputs of high frequency bands of spline wavelet transforms. We examine and evaluate the contributions of different factors to the segmentation results from the viewpoints of decomposition levels, frequency bands and wavelet functions. Further performance analysis reveals the advantages of the proposed method.
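A drastically simplified stand-in for this pipeline, replacing the cubic B-spline wavelet high-frequency band with a plain first-difference filter and using Lloyd's algorithm for the two-means step, might look like this (every detail below is a toy assumption, not the paper's method):

```python
def highpass_energy(row):
    """Per-pixel edge energy from a first-difference high-pass
    filter, a crude stand-in for a spline-wavelet high band."""
    return [abs(row[i + 1] - row[i]) for i in range(len(row) - 1)]

def two_means(values, iters=20):
    """1D two-means (Lloyd's algorithm): label each value 0 or 1."""
    lo, hi = min(values), max(values)
    for _ in range(iters):
        labels = [0 if abs(v - lo) <= abs(v - hi) else 1 for v in values]
        zeros = [v for v, l in zip(values, labels) if l == 0]
        ones = [v for v, l in zip(values, labels) if l == 1]
        if zeros:
            lo = sum(zeros) / len(zeros)
        if ones:
            hi = sum(ones) / len(ones)
    return labels

# "Text" pixels alternate sharply; "background" is smooth.
scanline = [0, 9, 0, 9, 0, 9, 4, 4, 4, 4, 4, 4]
labels = two_means(highpass_energy(scanline))
```

High edge energy marks the text region, so the two-means labels separate text from background on this toy scanline.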

5.
Combining the advantages of layered coding and multiple description coding, a layered multiple description coding (LMDC) method for 3D meshes is proposed. The 3D mesh is first geometrically decomposed into a coarse mesh plus the connectivity information needed for refinement; following the layered-coding idea, the coarse mesh serves as the base layer and the refinement information as enhancement layers. A multiple description coding scheme based on the vertex-split tree is applied to protect the base layer, ensuring its reliable transmission over error-prone channels. Encoding 3D models with layered multiple description coding is well suited to heterogeneous networks with limited bandwidth and multi-path transmission. Experiments show that the method achieves a high compression ratio and, under packet loss, can effectively protect the base layer and recover an acceptable base mesh.

6.
An efficient and robust algorithm for 3D mesh segmentation
This paper presents an efficient and robust algorithm for 3D mesh segmentation. Segmentation is one of the main areas of 3D object modeling. Most segmentation methods decompose 3D objects into parts based on curvature analysis, yet most existing curvature estimation algorithms are computationally costly. The proposed algorithm extracts features using Gaussian curvature and concaveness estimation to partition a 3D model into meaningful parts. More importantly, this algorithm can process highly detailed objects using an eXtended Multi-Ring (XMR) neighborhood based feature extraction. After feature extraction, a fast-marching watershed-based segmentation algorithm is applied, followed by an efficient region merging scheme. Experimental results show that this segmentation algorithm is efficient and robust.
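Discrete Gaussian curvature at a mesh vertex is commonly estimated by the angle deficit (Gauss-Bonnet); a minimal sketch of that standard estimator, not the paper's XMR-neighborhood code:

```python
import math

def angle_deficit(vertex, ring):
    """Discrete Gaussian curvature proxy at `vertex`: 2*pi minus
    the sum of the incident triangle angles (angle deficit).
    `ring` lists the one-ring neighbours in cyclic order."""
    def angle(a, b):
        dot = sum(x * y for x, y in zip(a, b))
        na = math.sqrt(sum(x * x for x in a))
        nb = math.sqrt(sum(x * x for x in b))
        return math.acos(max(-1.0, min(1.0, dot / (na * nb))))
    total = 0.0
    for i in range(len(ring)):
        p = [ring[i][k] - vertex[k] for k in range(3)]
        q = [ring[(i + 1) % len(ring)][k] - vertex[k] for k in range(3)]
        total += angle(p, q)
    return 2 * math.pi - total

# A vertex in a flat plane has zero deficit; a cube corner has pi/2.
flat = angle_deficit((0, 0, 0),
                     [(1, 0, 0), (0, 1, 0), (-1, 0, 0), (0, -1, 0)])
corner = angle_deficit((0, 0, 0), [(1, 0, 0), (0, 1, 0), (0, 0, 1)])
```

Large positive or negative deficits flag the curved and concave regions that segmentation boundaries tend to follow.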

7.
This paper presents an efficient algorithm, called pattern reduction (PR), for reducing the computation time of k-means and k-means-based clustering algorithms. The proposed algorithm works by compressing and removing at each iteration patterns that are unlikely to change their membership thereafter. Not only is the proposed algorithm simple and easy to implement, but it can also be applied to many other iterative clustering algorithms such as kernel-based and population-based clustering algorithms. Our experiments—from 2 to 1000 dimensions and 150 to 10,000,000 patterns—indicate that with a small loss of quality, the proposed algorithm can significantly reduce the computation time of all state-of-the-art clustering algorithms evaluated in this paper, especially for large and high-dimensional data sets.
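The core idea, skipping patterns whose membership has stabilized, can be sketched in one dimension (a toy illustration of the stability test only; the margin threshold and the compression step are assumptions, not the paper's criteria):

```python
def stable_patterns(points, centers, margin=2.0):
    """Return indices of points judged unlikely to change cluster:
    the distance to the nearest center is smaller, by at least
    `margin`, than the distance to the second-nearest center.
    Such points can be frozen and skipped in later k-means passes."""
    frozen = []
    for i, p in enumerate(points):
        dists = sorted(abs(p - c) for c in centers)
        if dists[1] - dists[0] >= margin:
            frozen.append(i)
    return frozen

points = [0.1, 0.4, 4.9, 5.2, 2.5]   # 2.5 sits between the centers
centers = [0.0, 5.0]
frozen = stable_patterns(points, centers)
```

Only the ambiguous point near the midpoint keeps being re-examined, which is where the iteration-time savings come from.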

8.
In this paper, we present a fast global k-means clustering algorithm by making use of the cluster membership and geometrical information of a data point. This algorithm is referred to as MFGKM. The algorithm uses a set of inequalities developed in this paper to determine a starting point for the jth cluster center of global k-means clustering. Adopting multiple cluster center selection (MCS) for MFGKM, we also develop another clustering algorithm called MFGKM+MCS. MCS determines more than one starting point for each step of cluster split; while the available fast and modified global k-means clustering algorithms select one starting point for each cluster split. Our proposed method MFGKM can obtain the least distortion; while MFGKM+MCS may give the least computing time. Compared to the modified global k-means clustering algorithm, our method MFGKM can reduce the computing time and number of distance calculations by a factor of 3.78-5.55 and 21.13-31.41, respectively, with the average distortion reduction of 5,487 for the Statlog data set. Compared to the fast global k-means clustering algorithm, our method MFGKM+MCS can reduce the computing time by a factor of 5.78-8.70 with the average reduction of distortion of 30,564 using the same data set. The performances of our proposed methods are more remarkable when a data set with higher dimension is divided into more clusters.

9.
《Graphical Models》2014,76(5):440-456
We present an automatic mesh segmentation framework that achieves 3D segmentation in two stages, hierarchical spectral analysis and isoline-based boundary detection. During the hierarchical spectral analysis stage, a novel segmentation field is defined to capture a concavity-aware decomposition of eigenvectors from a concavity-aware Laplacian. Specifically, a sufficient number of eigenvectors is first adaptively selected and simultaneously partitioned into sub-eigenvectors through spectral clustering. Next, on the sub-eigenvectors level, we evaluate the confidence of identifying a spectral-sensitive mesh boundary for each sub-eigenvector by two joint measures, namely, inner variations and part oscillations. The selection and combination of sub-eigenvectors are thereby formulated as an optimization problem to generate a single segmentation field. In the isoline-based boundary detection stage, the segmentation boundaries are recognized by a divide-merge algorithm and a cut score, which respectively filters and measures desirable isolines from the concise single segmentation field. Experimental results on the Princeton Segmentation Benchmark and a number of other complex meshes demonstrate the effectiveness of the proposed method, which is comparable to recent state-of-the-art algorithms.
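At its simplest, a spectral segmentation field comes from eigenvectors of a graph Laplacian; the sketch below uses the plain (not concavity-aware) Laplacian and splits a mesh graph by the sign of the Fiedler vector, only to make the spectral machinery concrete:

```python
import numpy as np

def fiedler_split(adjacency):
    """Two-way spectral partition: signs of the second-smallest
    eigenvector (Fiedler vector) of the unnormalized Laplacian."""
    A = np.asarray(adjacency, dtype=float)
    L = np.diag(A.sum(axis=1)) - A
    _, vecs = np.linalg.eigh(L)          # eigenvalues ascending
    return vecs[:, 1] >= 0               # boolean part labels

# Two triangles joined by a single weak edge (vertices 0-2 and 3-5).
A = [[0, 1, 1, 0, 0, 0],
     [1, 0, 1, 0, 0, 0],
     [1, 1, 0, 1, 0, 0],
     [0, 0, 1, 0, 1, 1],
     [0, 0, 0, 1, 0, 1],
     [0, 0, 0, 1, 1, 0]]
labels = fiedler_split(A)
```

The Fiedler vector changes sign across the bottleneck edge, so the two triangles land in different parts; the paper's sub-eigenvector analysis refines this basic signal.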

10.
A survey of clustering analysis techniques in 3D mesh segmentation
3D mesh segmentation is an important research direction in computer graphics, and new segmentation techniques have emerged continually in recent years. This survey focuses on 3D mesh segmentation techniques based on clustering analysis. It introduces the two common types of 3D mesh segmentation, formulates the mathematical problems into which segmentation is cast, and summarizes a set of commonly used mesh attributes. Existing algorithms are divided into five classes by their underlying technique: region growing, multi-source region growing, hierarchical clustering, iterative clustering, and spectral clustering. The segmentation algorithms in each class are compared and discussed with respect to their segmentation goals and the mesh attributes they exploit; evaluation criteria from four perspectives are given to illustrate the strengths and weaknesses of each class in different application scenarios; and development trends and application directions of 3D mesh segmentation are pointed out.

11.
12.
Differential evolution (DE) is a simple and efficient global optimization algorithm. However, DE has been shown to have certain weaknesses, especially if the global optimum should be located using a limited number of function evaluations (NFEs). Hence hybridization with other methods is a promising research direction for the improvement of differential evolution. In this paper, a hybrid DE based on one-step k-means clustering and 2 multi-parent crossovers, called clustering-based differential evolution with 2 multi-parent crossovers (2-MPCs-CDE), is proposed for unconstrained global optimization problems. In 2-MPCs-CDE, k cluster centers and several new individuals generate two search spaces. These spaces are then searched in turn. This method utilizes the information of the population effectively and improves search efficiency, and hence can enhance the performance of DE. A comprehensive set of 35 benchmark functions is employed for experimental verification. Experimental results indicate that 2-MPCs-CDE is effective and efficient. Compared with other state-of-the-art evolutionary algorithms, 2-MPCs-CDE performs better, or at least comparably, in terms of the solution accuracy and the convergence rate.
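The baseline the hybrid builds on is plain DE/rand/1/bin; one generation of that baseline can be sketched as below (seeded for reproducibility, and deliberately without the paper's clustering or multi-parent crossovers; the sphere objective and parameter values are illustrative assumptions):

```python
import random

def de_generation(pop, fitness, f=0.5, cr=0.9, rng=random):
    """One DE/rand/1/bin generation with greedy selection."""
    dim = len(pop[0])
    nxt = []
    for i, target in enumerate(pop):
        a, b, c = rng.sample([p for j, p in enumerate(pop) if j != i], 3)
        j_rand = rng.randrange(dim)        # force at least one mutated gene
        trial = [a[d] + f * (b[d] - c[d])
                 if d == j_rand or rng.random() < cr else target[d]
                 for d in range(dim)]
        nxt.append(min(target, trial, key=fitness))  # greedy selection
    return nxt

sphere = lambda x: sum(v * v for v in x)   # toy test objective
rng = random.Random(0)
pop = [[rng.uniform(-5, 5) for _ in range(5)] for _ in range(20)]
initial_best = min(sphere(p) for p in pop)
for _ in range(100):
    pop = de_generation(pop, sphere, rng=rng)
best = min(sphere(p) for p in pop)
```

Greedy selection guarantees the best fitness never worsens; the hybrid's clustering step aims to spend the limited NFEs more effectively than this baseline.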

13.
Bone plates, a common class of medical devices for fixing fractured bones, must fit closely against the bone. However, most current bone plates are designed from clinical experience, or with reference to the geometry of only a few bone specimens, and cannot meet this requirement. This work therefore analyzes a dataset of 3D bone models to provide a basis for the serialized design of bone plates. The method first remeshes the bone data into topologically consistent meshes using coherent point drift non-rigid registration and Laplacian-coordinate mesh deformation; principal component analysis and clustering analysis then yield a series of "average bones". Bone plates designed in this way better satisfy the fit constraint with the fractured bone and meet the requirements of practical application.

14.
In this paper we explore a recent iterative compression technique called non-negative matrix factorization (NMF). Several special properties are obtained as a result of the constrained optimization problem of NMF. For facial images, the additive nature of NMF results in a basis of features, such as eyes, noses, and lips. We explore various methods for efficiently computing NMF, placing particular emphasis on the initialization of current algorithms. We propose using Spherical K-Means clustering to produce a structured initialization for NMF. We demonstrate some of the properties that result from this initialization and develop an efficient way of choosing the rank of the low-dimensional NMF representation.
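The iteration typically being initialized here is Lee-Seung multiplicative updating; a compact sketch with a plain random initialization (not the paper's spherical k-means one), shown only to make the NMF iteration concrete:

```python
import numpy as np

def nmf(V, rank, iters=500, seed=0, eps=1e-9):
    """Lee-Seung multiplicative updates for V ~ W @ H under the
    Frobenius objective; W and H stay elementwise non-negative."""
    rng = np.random.default_rng(seed)
    m, n = V.shape
    W = rng.random((m, rank)) + eps
    H = rng.random((rank, n)) + eps
    for _ in range(iters):
        H *= (W.T @ V) / (W.T @ W @ H + eps)
        W *= (V @ H.T) / (W @ H @ H.T + eps)
    return W, H

# This matrix has non-negative rank 2 (row2 = 2*row1), so a rank-2
# NMF can fit it almost exactly.
V = np.array([[1.0, 2.0, 3.0],
              [2.0, 4.0, 6.0],
              [1.0, 1.0, 1.0]])
W, H = nmf(V, rank=2)
err = np.linalg.norm(V - W @ H)
```

Because the updates are only locally convergent, the quality of the starting W and H matters, which is the motivation for a structured (e.g. clustering-based) initialization.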

15.
Three-dimensional free form shape matching is a fundamental problem in both the machine vision and pattern recognition literature. However, automatic 3D free form shape matching remains an open problem. In this paper, we propose using k closest points in the second view for automatic 3D free form shape matching. For the sake of computational efficiency, an optimised k-D tree is employed for the search of the k closest points. Since occlusion and the appearance and disappearance of points almost always occur, slack variables have to be employed, explicitly modelling outliers in the process of matching. The relative quality of each possible point match is then estimated using the graduated assignment algorithm, and the camera motion parameters are estimated by the quaternion method in the weighted least-squares sense. Experimental results based on both synthetic data and real images without any pre-processing show the effectiveness and efficiency of the proposed algorithm for the automatic matching of overlapping 3D free form shapes with either sparse or dense points.
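The k-closest-point query itself is independent of the k-D tree used to accelerate it; a brute-force version via a heap shows what the tree computes (the tree is purely an optimization, and this toy point set is an assumption):

```python
import heapq

def k_closest(query, points, k):
    """Indices of the k points nearest to `query` by squared
    Euclidean distance; an optimised k-D tree returns the same
    answer in roughly logarithmic rather than linear time."""
    def sq_dist(p):
        return sum((a - b) ** 2 for a, b in zip(query, p))
    return heapq.nsmallest(k, range(len(points)),
                           key=lambda i: sq_dist(points[i]))

pts = [(0, 0, 0), (1, 0, 0), (5, 5, 5), (0, 1, 1), (4, 4, 4)]
nearest = k_closest((0, 0, 0), pts, k=3)
```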

16.
3D shape segmentation is an important problem in 3D shape analysis. To make segmentation results robust to the rich pose variations of non-rigid shapes, a 3D mesh segmentation method based on diffusion geometry is proposed. The method takes the local maxima of the wave kernel signature as salient feature points on the surface of a non-rigid mesh model, then uses these feature points as initial cluster centers for a k-means clustering algorithm to obtain the segmentation. Experimental results show that the method not only yields consistent segmentations for non-rigid 3D shapes in different poses, but is also robust to noise, holes, and other defects.

17.
Angularity is a critically important property in terms of the performance of natural particulate materials. It is also one of the most difficult to measure objectively using traditional methods. Here we present an innovative and efficient approach to the determination of particle angularity using image analysis. The direct use of three-dimensional data offers a more robust solution than the two-dimensional methods proposed previously. The algorithm is based on the application of mathematical morphological techniques to range imagery, and effectively simulates the natural wear processes by which rock particles become rounded. The analysis of simulated volume loss is used to provide a valuable measure of angularity that is geometrically commensurate with the traditional definitions. Experimental data obtained using real particle samples are presented and results correlated with existing methods in order to demonstrate the validity of the new approach. The implementation of technologies such as these has the potential to offer significant process optimisation and environmental benefits to the producers of aggregates and their composites. The technique is theoretically extendable to the quantification of surface texture.
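In the binary 2D case, the wear simulation reduces to repeated morphological erosion plus a count of removed cells; a toy sketch of that idea (the paper operates on 3D range imagery, and this grid, structuring element, and loss ratio are illustrative assumptions only):

```python
def erode(cells):
    """One binary erosion with a 4-connected structuring element:
    keep a cell only if all four neighbours are also present."""
    return {(x, y) for x, y in cells
            if {(x - 1, y), (x + 1, y), (x, y - 1), (x, y + 1)} <= cells}

def simulated_loss(cells, rounds=1):
    """Fraction of 'volume' removed by `rounds` of simulated wear;
    angular (cornered) shapes lose more than rounded ones."""
    worn = cells
    for _ in range(rounds):
        worn = erode(worn)
    return 1 - len(worn) / len(cells)

# A 6x6 square loses its entire one-cell-thick boundary in one round.
square = {(x, y) for x in range(6) for y in range(6)}
loss = simulated_loss(square)
```

The more acute a particle's corners, the larger the fraction of material one erosion round strips away, which is what makes simulated volume loss a usable angularity measure.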

18.
Face detection is a crucial preliminary in many applications. Most of the approaches to face detection have focused on the use of two-dimensional images. We present an innovative method that combines a feature-based approach with a holistic one for three-dimensional (3D) face detection. Salient face features, such as the eyes and nose, are detected through an analysis of the curvature of the surface. Each triplet consisting of a candidate nose and two candidate eyes is processed by a PCA-based classifier trained to discriminate between faces and non-faces. The method has been tested, with good results, on some 150 3D faces acquired by a laser range scanner.

19.
A new strategy for automatic object extraction in highly complex scenes is presented in this paper. The proposed method gives a solution for 3D segmentation that avoids most restrictions imposed in other techniques. Thus, our technique is applicable to unstructured 3D information (i.e. clouds of points), with a single view of the scene, scenes consisting of several objects where contact, occlusion and shadows are allowed, and objects with uniform intensity/texture and without restrictions of shape, pose or location. To provide a fast stopping criterion for segmentation, the number of objects in the scene is taken as input. The method is based on a new distributed segmentation technique that explores the 3D data by establishing a set of suitable observation directions. For each exploration viewpoint, a strategy [3D data]-[2D projected data]-[2D segmentation]-[3D segmented data] is carried out. This strategy differs from current 3D segmentation strategies. The method has been successfully tested in our lab on a set of real complex scenes. The results of these experiments, conclusions and future improvements are also presented in the paper.

20.
Data mining is the process of extracting desirable knowledge or interesting patterns from existing databases for specific purposes. Mining association rules from transaction data is the most common of these techniques. Most previous mining approaches set a single minimum support threshold for all the items and identify the relationships among transactions using binary values. In the past, we proposed a genetic-fuzzy data-mining algorithm for extracting both association rules and membership functions from quantitative transactions under a single minimum support. In real applications, however, different items may have different criteria to judge their importance. In this paper, we thus propose an algorithm which combines clustering, fuzzy and genetic concepts for extracting reasonable multiple minimum support values, membership functions and fuzzy association rules from quantitative transactions. It first uses the k-means clustering approach to gather similar items into groups. All items in the same cluster are considered to have similar characteristics and are assigned similar values for initializing a better population. Each chromosome is then evaluated by the criteria of requirement satisfaction and suitability of membership functions to estimate its fitness value. Experimental results show the effectiveness and the efficiency of the proposed approach.
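The first stage, gathering items with similar characteristics via k-means, can be sketched in one dimension (a toy stand-in: the item names, the quantity values, and clustering on a single scalar are all assumptions made for illustration):

```python
def kmeans_1d(values, k, iters=30):
    """Lloyd's k-means on scalar values; returns cluster labels."""
    # Spread the initial centers across the sorted value range.
    centers = sorted(values)[::max(1, len(values) // k)][:k]
    for _ in range(iters):
        labels = [min(range(k), key=lambda j: abs(v - centers[j]))
                  for v in values]
        for j in range(k):
            members = [v for v, l in zip(values, labels) if l == j]
            if members:
                centers[j] = sum(members) / len(members)
    return labels

# Hypothetical average purchase quantities per item; items in the
# same cluster receive similar initial minimum-support values.
items = {"milk": 9.5, "bread": 10.2, "caviar": 1.1, "truffle": 0.8}
labels = kmeans_1d(list(items.values()), k=2)
```

Seeding the genetic population from these clusters gives similar items similar starting thresholds, which is the "better population" the abstract refers to.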
