Similar Documents (20 results)
1.
A Non-uniform Simplification Algorithm for 3D Scattered Data
熊邦书, 雷鸰. 《计算机工程》, 2004, 30(22): 32-33, 97
A non-uniform simplification algorithm for three-dimensional scattered data is proposed. The algorithm first partitions the minimum bounding box of the data set into many equally sized cubes; then, within each cube that contains data points, it computes the local normal variation of the enclosed surface, and simplifies the whole 3D data set non-uniformly according to a user-specified threshold on that variation. Application examples verify the effectiveness of the algorithm.
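A minimal sketch of this style of grid-based non-uniform simplification, assuming unit normals have already been estimated per point; the function name, the flat uniform grid standing in for the paper's bounding-box subdivision, and the exact variation measure are illustrative choices, not the paper's formulation:

```python
import numpy as np

def nonuniform_simplify(points, normals, cell_size, variation_threshold):
    """Keep all points in cubes where the surface normal varies strongly,
    and one representative point per cube elsewhere (hypothetical API)."""
    mins = points.min(axis=0)
    # Assign each point to a cube of the bounding-box grid.
    keys = np.floor((points - mins) / cell_size).astype(int)
    cells = {}
    for i, key in enumerate(map(tuple, keys)):
        cells.setdefault(key, []).append(i)
    kept = []
    for idx in cells.values():
        n = normals[idx]
        # With unit normals, 1 - |mean normal| is 0 on flat patches and
        # grows as the local normals spread apart.
        variation = 1.0 - np.linalg.norm(n.mean(axis=0))
        if variation > variation_threshold:
            kept.extend(idx)      # detailed region: keep every point
        else:
            kept.append(idx[0])   # flat region: keep one representative
    return points[np.asarray(kept)]
```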

2.
Based on the level-set representation of implicit surfaces, the notion of the intrinsic gradient on an implicit surface, and the labeling-function approach to image segmentation, a level-set model for multiphase image segmentation on implicit surfaces is established, together with a corresponding Split Bregman method. First, the Chan-Vese model for two-phase segmentation of piecewise-constant and piecewise-smooth planar images is extended to a variational level-set model for segmentation on implicit surfaces, and the model is converted into a globally convex optimization problem via a binary labeling function and convex relaxation. Then, using region indicator functions that partition n regions with n-1 level-set functions, the two-phase variational model on implicit surfaces is generalized to multiphase segmentation, and convex optimization relaxes the variational problem into a sequence of convex subproblems. A Split Bregman method, designed by introducing auxiliary variables and Bregman iteration parameters, reduces each subproblem to solving a simple Poisson equation and applying an analytic soft-thresholding formula. Numerical examples show that the method outperforms traditional methods in computational efficiency.
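For reference, the analytic soft-thresholding step that Split Bregman schemes of this kind reduce each subproblem to has the standard shrinkage form (notation ours, not the paper's): the auxiliary variable $\mathbf{d}$ is updated by

$$\mathbf{d}^{*} = \operatorname{shrink}\!\left(\nabla u + \mathbf{b},\ \tfrac{1}{\lambda}\right), \qquad \operatorname{shrink}(\mathbf{x}, \gamma) = \frac{\mathbf{x}}{\|\mathbf{x}\|}\,\max\!\left(\|\mathbf{x}\| - \gamma,\ 0\right),$$

while the remaining update for $u$ is the Poisson equation mentioned in the abstract, solvable by standard fast solvers.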

3.
For point cloud preprocessing in reverse engineering, the shortcomings of existing data reduction methods are analyzed and a uniform reduction method based on octree encoding is proposed. Octree encoding partitions the point cloud's neighborhood space into sub-cubes of a specified edge length, and in each sub-cube the point closest to the cube center is retained, reducing the point cloud from a global spatial perspective. Reduction tests on turbine-blade measurement data demonstrate the effectiveness and practicality of the algorithm.
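A minimal sketch of the retained-point rule, using a flat uniform grid rather than the paper's octree encoding; the function name and cell-size parameter are illustrative:

```python
import numpy as np

def uniform_grid_reduce(points, cell_size):
    """In every occupied cube, keep only the point closest to the cube
    center. A sketch of grid-based uniform reduction, not the octree code."""
    mins = points.min(axis=0)
    keys = np.floor((points - mins) / cell_size).astype(int)
    best = {}  # cell key -> (distance to cell center, point index)
    for i, key in enumerate(map(tuple, keys)):
        center = mins + (np.array(key) + 0.5) * cell_size
        d = np.linalg.norm(points[i] - center)
        if key not in best or d < best[key][0]:
            best[key] = (d, i)
    return points[[i for _, i in best.values()]]
```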

4.
Surface reconstruction methods for 3D models and point clouds are studied in depth, and an octree spatial-partitioning and NURBS surface reconstruction method is proposed based on application characteristics. The fast convergence of the octree is exploited to partition and reduce the point cloud data of a 3D solid, and NURBS is used to reconstruct local mesh surfaces; a hybrid octree/quadtree data structure reconstructs the mesh surface progressively. Storage uses an extended octree structure with octal prefix encoding. An experimental model system implemented with OpenGL verifies the feasibility and effectiveness of the algorithm.

5.
A new fast algorithm for k-nearest-neighbor search among massive spatial data points is proposed. The algorithm jointly considers the extent of the spatial data, the total number of points, the neighbor count k, and the point density, yielding a new way to estimate the edge length of the sub-cubes; a spatial blocking strategy partitions the data space into sub-cubes, whose size determines the k-NN search speed; finally, the points contained in each sub-cube and the sub-cube index of each point are recorded, and the k nearest neighbors of a query point are searched. Experiments on large data sets show that the algorithm greatly accelerates k-NN search among massive spatial data points.
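A simplified grid-based k-NN sketch of this idea: cells are visited in expanding shells around the query point until k candidates accumulate, then ranked by exact distance. The paper's edge-length estimation is not reproduced (`cell` is a user-supplied parameter here), and an exact implementation would verify one extra shell before returning:

```python
import numpy as np
from itertools import product

def build_grid(points, cell):
    """Hash every point into a uniform grid of edge length `cell`."""
    mins = points.min(axis=0)
    grid = {}
    for i, key in enumerate(map(tuple, np.floor((points - mins) / cell).astype(int))):
        grid.setdefault(key, []).append(i)
    return grid, mins

def knn(points, grid, mins, cell, q, k):
    """Collect candidates shell by shell around the query's cell, then
    sort the candidates by true distance and return the k closest."""
    key = tuple(np.floor((q - mins) / cell).astype(int))
    r, cand = 0, []
    while len(cand) < k and r < 64:
        for off in product(range(-r, r + 1), repeat=3):
            if max(abs(o) for o in off) == r:     # only the newly added shell
                cand.extend(grid.get(tuple(np.add(key, off)), []))
        r += 1
    cand = np.asarray(cand)
    d = np.linalg.norm(points[cand] - q, axis=1)
    return cand[np.argsort(d)[:k]]
```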

6.
《计算机工程》2017,(2):252-256
To address the heavy computation and long running time of point cloud boundary extraction algorithms, a two-stage boundary extraction algorithm for point cloud surfaces is proposed. A spatial bounding box uniformly partitions the point cloud surface into small cubes, placing every point into a cube, and boundary sub-cubes are extracted from the number and distribution of non-empty neighboring sub-cubes. Then, exploiting the distribution of the data points, all k nearest neighbors of a target point inside a boundary sub-cube are projected onto a plane centered at the target point; the angle between each vector formed by a projected point and the center, and a chosen coordinate axis, is computed, and the target point is judged to be a boundary point according to whether a preset condition is satisfied. Experimental results show that the method effectively reduces computation and improves extraction accuracy.
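A hedged sketch of one common form of this angle criterion: project the k neighbors onto the tangent plane at the target point and flag a boundary point when the sorted projection angles leave a large gap (interior points are surrounded on all sides; boundary points are not). The paper measures angles against a coordinate axis; the basis construction and threshold below are illustrative choices:

```python
import numpy as np

def is_boundary_point(p, neighbors, normal, gap_threshold=np.pi / 2):
    """Return True if the projected neighbors of p leave an angular gap
    larger than gap_threshold in the tangent plane at p."""
    # Build an orthonormal basis (u, v) of the tangent plane at p.
    u = np.cross(normal, [1.0, 0.0, 0.0])
    if np.linalg.norm(u) < 1e-8:                  # normal parallel to x-axis
        u = np.cross(normal, [0.0, 1.0, 0.0])
    u /= np.linalg.norm(u)
    v = np.cross(normal, u)
    d = neighbors - p
    angles = np.sort(np.arctan2(d @ v, d @ u))
    # Gaps between consecutive angles, wrapping around the full circle.
    gaps = np.diff(np.concatenate([angles, [angles[0] + 2 * np.pi]]))
    return gaps.max() > gap_threshold
```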

7.
Point clouds acquired by non-contact scanning contain large amounts of redundant data. Since data reduction is an indispensable preprocessing step for model reconstruction, a point cloud reduction algorithm based on spatial partitioning and curvature information is proposed. Through key reduction techniques including k-neighborhood computation, quadric surface fitting, curvature estimation, and data partitioning with an adjustable curvature threshold, different reduction algorithms with different ratios are applied to different regions of the same data set. Case studies show that the algorithm adapts to the reduction requirements of various surface types, preserving the feature information of the point cloud while maintaining reduction efficiency.

8.
A Boundary-preserving Point Cloud Simplification Method
To address the loss of boundary points in point cloud simplification algorithms, a boundary-preserving non-uniform simplification algorithm for 3D scattered point clouds is proposed. First, a kd-tree builds the spatial topology of the scattered points and the k-neighborhood of each point is computed. Then, to overcome the low efficiency of boundary extraction based on point distribution uniformity, an improved boundary-point detection algorithm is presented. Finally, all boundary points are retained, and non-boundary points are simplified non-uniformly according to their surface variation and the fraction of already-retained points in their k-neighborhoods. Experiments show that the algorithm achieves high accuracy and low space complexity, and that the simplified cloud keeps its boundary intact.
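The "surface variation" this kind of simplification thresholds is commonly computed as the smallest eigenvalue of the local covariance matrix over the eigenvalue sum (Pauly's measure); a sketch using a kd-tree for the k-neighborhoods, with parameter names ours:

```python
import numpy as np
from scipy.spatial import cKDTree

def surface_variation(points, k=16):
    """Per-point surface variation sigma = l0 / (l0 + l1 + l2), where
    l0 <= l1 <= l2 are the covariance eigenvalues of the k-neighborhood."""
    tree = cKDTree(points)
    _, idx = tree.query(points, k=k)
    sigma = np.empty(len(points))
    for i, nb in enumerate(idx):
        cov = np.cov(points[nb].T)
        w = np.linalg.eigvalsh(cov)        # eigenvalues in ascending order
        sigma[i] = w[0] / max(w.sum(), 1e-12)
    return sigma  # near 0 on flat patches, large at creases and details
```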

9.
刘光帅  李柏林 《计算机应用》2012,32(12):3361-3364
For the 3D segmentation-reconstruction problem over image sequences captured by calibrated cameras, a new probability-oriented variational method is proposed. First, a maximum-likelihood surface of the image sequence is computed, reconstructing a 3D surface consistent with the segmentation; next, by fusing joint probabilities, the mean intensity and standard deviation of the target object and the image background are reconstructed; finally, a level-set framework numerically simulates the surface energy equation. The method handles reconstruction of complex topology and noisy data. Experimental results show that it is practical and robust, and that it segments and reconstructs arbitrary 3D objects better than shape-carving and volumetric-segmentation methods.

10.
Point cloud segmentation partitions a point cloud by spatial, geometric, and texture features so that points within one partition share similar characteristics. The acquired scattered point cloud is first preprocessed by denoising, hole filling, and distortion removal; the minimum bounding cube then partitions the point cloud space and an octree is built to accelerate neighbor search; a least-squares neighborhood is constructed for every point and the Gaussian and mean curvatures of the scattered data are analyzed; region growing finally yields accurate, low-noise patches, segmenting the cloud adaptively and intelligently. Experiments verify that the method achieves good segmentation results.

11.
An LFSR Reseeding Method Based on Odd-Even Bit Splitting of Partial Test Vectors
An LFSR reseeding test method based on odd-even bit splitting of selected test vectors is proposed. Exploiting the facts that the number of specified bits varies widely across the vectors of a deterministic test set and that the specified bits mostly cluster in contiguous blocks, the method splits the vectors with many specified bits into odd-position and even-position halves, effectively lowering the degree of the encoding LFSR and thereby raising the test-data compression ratio. The decompression circuit still uses a single LFSR for decoding and for merging the split vectors. Compared with current encoding-compression methods, it offers a higher test-data compression ratio, lower decompression hardware overhead, and a simpler test-data transfer protocol.
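For intuition, a generic Fibonacci LFSR expander: the tester stores one short seed per test vector and the on-chip LFSR regenerates the full vector. This illustrates plain reseeding only; the paper's odd-even splitting and merge circuit are not modeled, and the degree and tap positions below are arbitrary:

```python
def lfsr_expand(seed_bits, taps, length):
    """Expand a seed through a Fibonacci LFSR into a bit stream: shift out
    the last bit, feed back the XOR of the tapped positions."""
    state = list(seed_bits)        # e.g. a degree-5 LFSR seeded with 5 bits
    out = []
    for _ in range(length):
        out.append(state[-1])
        fb = 0
        for t in taps:             # XOR the tapped state bits
            fb ^= state[t]
        state = [fb] + state[:-1]
    return out

# One seed per test vector: the tester ships short seeds, not full vectors.
print(lfsr_expand([1, 0, 0, 1, 1], taps=[0, 2], length=16))
```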

12.
The closed data cube is an effective lossless compression technique: it removes redundant information from the data cube, substantially reducing storage space and speeding up computation with almost no impact on query performance. Hadoop's MapReduce parallel computing model provides technical support for cube computation, and its distributed file system HDFS supports cube storage. To save storage space and accelerate queries, the closed histogram cube is proposed on top of the traditional data cube; it further reduces storage via encoding on top of the closed cube and speeds up queries via indexing. The Hadoop parallel platform supports the closed histogram cube in both scalability and load balance. Experiments show that the closed histogram cube compresses the data cube effectively and delivers high query performance, and that, in line with Hadoop's characteristics, adding nodes markedly accelerates computation.

13.
Data analysis applications typically aggregate data across many dimensions looking for anomalies or unusual patterns. The SQL aggregate functions and the GROUP BY operator produce zero-dimensional or one-dimensional aggregates. Applications need the N-dimensional generalization of these operators. This paper defines that operator, called the data cube or simply cube. The cube operator generalizes the histogram, cross-tabulation, roll-up, drill-down, and sub-total constructs found in most report writers. The novelty is that cubes are relations. Consequently, the cube operator can be imbedded in more complex non-procedural data analysis programs. The cube operator treats each of the N aggregation attributes as a dimension of N-space. The aggregate of a particular set of attribute values is a point in this space. The set of points forms an N-dimensional cube. Super-aggregates are computed by aggregating the N-cube to lower dimensional spaces. This paper (1) explains the cube and roll-up operators, (2) shows how they fit in SQL, (3) explains how users can define new aggregate functions for cubes, and (4) discusses efficient techniques to compute the cube. Many of these features are being added to the SQL Standard.
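A toy Python rendering of the cube operator: one GROUP BY per subset of the N dimension attributes, with rolled-up attributes reported as ALL (the example schema is illustrative, not from the paper):

```python
from itertools import combinations

def cube(rows, dims, measure):
    """Compute the CUBE of `rows` over the dimension attributes `dims`,
    summing `measure` over every subset of dims; ALL marks a rolled-up
    attribute, so 2**len(dims) group-bys are produced."""
    results = {}
    for r in range(len(dims) + 1):
        for subset in combinations(dims, r):
            for row in rows:
                key = tuple(row[d] if d in subset else 'ALL' for d in dims)
                results[key] = results.get(key, 0) + row[measure]
    return results

sales = [{'model': 'chevy', 'year': 1990, 'units': 5},
         {'model': 'chevy', 'year': 1991, 'units': 7},
         {'model': 'ford',  'year': 1990, 'units': 3}]
for key, total in sorted(cube(sales, ('model', 'year'), 'units').items(), key=str):
    print(key, total)   # ('ALL', 'ALL') is the grand total, 15
```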

14.
On-line analytical processing (OLAP) typically involves complex aggregate queries over large datasets. The data cube has been proposed as a structure that materializes the results of such queries in order to accelerate OLAP. A significant fraction of the related work has been on Relational-OLAP (ROLAP) techniques, which are based on relational technology. Existing ROLAP cubing solutions mainly focus on “flat” datasets, which do not include hierarchies in their dimensions. Nevertheless, as shown in this paper, the nature of hierarchies introduces several complications into the entire lifecycle of a data cube including the operations of construction, storage, indexing, query processing, and incremental maintenance. This fact renders existing techniques essentially inapplicable in a significant number of real-world applications and mandates revisiting the entire cube lifecycle under the new perspective. In order to overcome this problem, the CURE algorithm has been recently proposed as an efficient mechanism to construct complete cubes over large datasets with arbitrary hierarchies and store them in a highly compressed format, compatible with the relational model. In this paper, we study the remaining phases in the cube lifecycle and introduce query-processing and incremental-maintenance algorithms for CURE cubes. These are significantly different from earlier approaches, which have been proposed for flat cubes constructed by other techniques and are inadequate for CURE due to its high compression rate and the presence of hierarchies. Our methods address issues such as cube indexing, query optimization, and lazy update policies. Especially regarding updates, such lazy approaches are applied for the first time on cubes. We demonstrate the effectiveness of CURE in all phases of the cube lifecycle through experiments on both real-world and synthetic datasets. Among the experimental results, we distinguish those that have made CURE the first ROLAP technique to complete the construction and usage of the cube of the highest-density dataset in the APB-1 benchmark (12 GB). CURE was in fact quite efficient on this, showing great promise with respect to the potential of the technique overall.

15.
New Algorithm for Computing Cube on Very Large Compressed Data Sets
Data compression is an effective technique to improve the performance of data warehouses. Since cube operation represents the core of online analytical processing in data warehouses, it is a major challenge to develop efficient algorithms for computing cube on compressed data warehouses. To our knowledge, very few cube computation techniques have been proposed for compressed data warehouses to date in the literature. This paper presents a novel algorithm to compute cubes on compressed data warehouses. The algorithm operates directly on compressed data sets without the need of first decompressing them. The algorithm is applicable to a large class of mapping complete data compression methods. The complexity of the algorithm is analyzed in detail. The analytical and experimental results show that the algorithm is more efficient than all other existing cube algorithms. In addition, a heuristic algorithm to generate an optimal plan for computing cube is also proposed.

16.
Parallel Dwarf Cube Construction in a MapReduce Environment
For data-intensive applications, a parallel Dwarf data cube construction algorithm based on the MapReduce framework is proposed. The algorithm equivalently partitions the traditional Dwarf cube into multiple independent sub-Dwarf cubes and uses the MapReduce architecture to implement parallel construction, querying, and updating of the Dwarf cube. Experiments show that the parallel Dwarf algorithm on the one hand combines the parallelism and high scalability of the MapReduce framework, and on the other hand combines...

17.
The design of an OLAP system for supporting real-time queries is one of the major research issues. One approach is to use data cubes, which are materialized precomputed multidimensional views of data in a data warehouse. We can derive a set of data cubes to answer each frequently asked query directly. However, there are two practical problems: (1) the maintenance cost of the data cubes, and (2) the query cost to answer those queries. Maintaining a data cube requires disk storage and CPU computation, so the maintenance cost is related to the total size as well as the total number of data cubes materialized. In most cases, materializing all data cubes is impractical. The maintenance cost may be reduced by merging some data cubes. However, the resulting larger data cubes will increase the query cost of answering some queries. If the bounds on the maintenance cost and the query cost are too strict, we help the user decide which queries to sacrifice and leave out of consideration. We have defined an optimization problem in data cube system design. Given a maintenance-cost bound, a query-cost bound and a set of frequently asked queries, it is necessary to determine a set of data cubes such that the system can answer a largest subset of the queries without violating the two bounds. This is an NP-hard problem. We propose approximate Greedy algorithms GR, 2GM and 2GMM, which are shown to be both effective and efficient by experiments done on a census data set and a forest-cover-type data set.
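A hedged sketch of the greedy selection idea: repeatedly pick the candidate cube that newly answers the most queries while the maintenance-cost bound still holds. The data model (`maint_cost`, `answers`) is hypothetical, and this simplification checks the query-cost bound per query rather than reproducing the paper's GR/2GM/2GMM algorithms:

```python
def greedy_cube_selection(candidate_cubes, maint_bound, query_bound):
    """Greedy cube-set design: each candidate has .maint_cost and .answers,
    a dict {query_id: query_cost}. Returns the chosen cubes and a mapping
    from each answered query to the cube that answers it."""
    chosen, answered, maint = [], {}, 0.0
    while True:
        best, best_gain = None, 0
        for cube in candidate_cubes:
            if cube in chosen or maint + cube.maint_cost > maint_bound:
                continue
            # Count queries this cube would newly answer within the bound.
            gain = sum(1 for q, c in cube.answers.items()
                       if q not in answered and c <= query_bound)
            if gain > best_gain:
                best, best_gain = cube, gain
        if best is None:          # no cube adds coverage within the bounds
            return chosen, answered
        chosen.append(best)
        maint += best.maint_cost
        for q, c in best.answers.items():
            if q not in answered and c <= query_bound:
                answered[q] = best
```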

18.
Optimization Problems in Data Cube System Design
Designing an on-line analytical processing system that supports real-time queries is an important current research problem, and data cubes are a common implementation approach. For frequently issued queries, a corresponding set of data cubes can be derived so that every query is answered directly. Designing a cube-based system, however, must consider two issues: (1) the maintenance cost of the data cubes, and (2) the response time for answering the frequent queries. Given user-specified upper bounds on maintenance cost and response time, the data cube set must be optimized so that the system meets the user's requirements while answering as many queries as possible. The paper defines this optimization problem of data cube system design, which is NP-complete, and proposes approximate greedy-deletion and greedy-merging algorithms. Experiments demonstrate the effectiveness of the algorithms.

19.
杨学兵 《微机发展》2002,12(6):52-54
Building on an in-depth study of classical association rule mining algorithms, and combining the structural characteristics of the data cube with OLAP technology, an efficient multidimensional association rule mining algorithm is presented, and the algorithm's performance under different data cubes is analyzed and compared.

20.
Multi-feature cubes answer complex data mining queries that compute multiple dependent complex aggregates at 2^n granularities. Existing cube-granularity computation techniques can efficiently compute distributive and algebraic multi-feature cubes; for holistic multi-feature cubes, an optimization strategy is proposed: the cube is first partitioned into horizontal blocks, then iceberg-query techniques select data dynamically and locally distributive aggregation properties optimize the computation. The strategy reduces both computational complexity and aggregation time; experimental results show that it more than doubles the performance of the basic solution.
