19 similar documents retrieved (search time: 46 ms)
1.
GPU-Oriented Regularization and Visualization of Tetrahedral Volume Data (Total citations: 1; self-citations: 0; citations by others: 1)
To render irregular 3D volume data fields efficiently, a GPU-oriented algorithm for regularizing and visualizing tetrahedral volume data is proposed. Sparse volume data with tetrahedra as basic cells are approximated by an octree of bounded depth, and the approximation error is encoded as a discrete perfect spatial hash. The semi-regular octree is then converted into a regular 3D octree texture, and the perfect spatial hash table into a 3D lookup table; both support fast random access at render time and can therefore be sampled directly as 3D textures on the GPU. With this doubly regularized representation, visualizing tetrahedral volume data reduces to rendering the two 3D textures in parallel on the GPU. Experimental results show that the algorithm maintains high accuracy on spatially sparse volume data while reducing storage.
2.
An Efficient Volume Data Compression Algorithm and Its Application to Seismic Data Processing (Total citations: 2; self-citations: 0; citations by others: 2)
Direct volume rendering of large-scale volume data on programmable graphics hardware is often limited by graphics-card memory, forcing frequent data exchange between main memory and video memory, which becomes the rendering bottleneck. A vector-quantization compression algorithm for large-scale volume data is therefore proposed. The volume is first partitioned into blocks, which are classified by whether the mean gradient magnitude within each block is zero. Blocks with nonzero gradient are represented by a three-level structure; for the middle and top levels, an initial codebook is generated by a splitting method based on principal component analysis and then optimized and quantized with the LBG algorithm, while the bottom level and the zero-gradient blocks are quantized with a fixed bit budget. Experimental results show that, while preserving good reconstruction quality, the algorithm achieves compression ratios above 50:1 and faster decompression.
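As a rough illustration of the LBG refinement step this abstract mentions, the following NumPy sketch assigns each vector to its nearest codeword and recenters every codeword on its assigned vectors. The PCA-based splitting initialization used in the paper is replaced here with a naive sample-based initialization; all names and sizes are illustrative.

```python
import numpy as np

def lbg_codebook(data, codebook, iters=10):
    """Refine an initial codebook with the LBG (Lloyd) iteration:
    nearest-codeword assignment followed by centroid updates."""
    codebook = codebook.copy()
    for _ in range(iters):
        # Pairwise distances: (n_vectors, n_codewords).
        d = np.linalg.norm(data[:, None, :] - codebook[None, :, :], axis=2)
        assign = d.argmin(axis=1)
        for k in range(len(codebook)):
            members = data[assign == k]
            if len(members):
                codebook[k] = members.mean(axis=0)
    return codebook

rng = np.random.default_rng(0)
blocks = rng.normal(size=(200, 4))   # stand-in for flattened volume blocks
init = blocks[:8].copy()             # naive initialization (paper uses PCA splitting)
cb = lbg_codebook(blocks, init)
```

Each refined codeword is the centroid of its Voronoi cell over the training vectors, which is what makes the iteration monotonically reduce quantization distortion.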
3.
To meet the needs of visualizing surface flow fields over aircraft, a dynamic texture visualization method is proposed that combines line integral convolution (LIC) with image-based flow visualization for curved surfaces (IBFVS). By replacing the random background noise of IBFVS with an LIC texture, the method combines the high contrast of LIC results with the generation speed of IBFVS. To offset the slow rendering of LIC, the GPU is used to project and interpolate the surface vector field into a regular vector data field, and the LIC stage is GPU-parallelized, substantially accelerating LIC texture generation. The LIC image is then advected within IBFVS to produce the dynamic texture, and color mapping is finally applied to enrich the flow-field information. Experimental results show that the dynamic surface textures produced by this method have high contrast and clarity, with good real-time rendering performance.
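A minimal CPU sketch of the LIC step described above: for each pixel, a streamline is traced forward and backward through a 2D vector field, and the noise values sampled along it are averaged. This assumes a regular grid and nearest-neighbor sampling for brevity; a GPU version would run the per-pixel loop in parallel, as the abstract describes.

```python
import numpy as np

def lic(vx, vy, noise, length=10):
    """Minimal line integral convolution on a regular 2D grid:
    average the noise texture along each pixel's streamline."""
    h, w = noise.shape
    out = np.zeros_like(noise)
    for i in range(h):
        for j in range(w):
            acc, n = noise[i, j], 1
            for sign in (1.0, -1.0):          # trace both directions
                y, x = float(i), float(j)
                for _ in range(length):
                    u, v = vx[int(y), int(x)], vy[int(y), int(x)]
                    m = np.hypot(u, v)
                    if m == 0.0:
                        break
                    x += sign * u / m
                    y += sign * v / m
                    if not (0 <= x < w and 0 <= y < h):
                        break
                    acc += noise[int(y), int(x)]
                    n += 1
            out[i, j] = acc / n
    return out

h = w = 32
noise = np.random.default_rng(1).random((h, w))
vx = np.ones((h, w))                # uniform rightward flow
vy = np.zeros((h, w))
img = lic(vx, vy, noise)
```

Because each output pixel depends only on read-only inputs, every pixel can be computed independently, which is exactly why LIC maps well onto the GPU.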
4.
Volume visualization is a technique for interactively presenting 3D data fields visually, allowing users to grasp the geometric structures and feature information in the data intuitively; it is widely used in medical imaging, petroleum exploration, meteorology, and scientific computing simulation. Because the internal features of 3D data fields are complex, achieving expressive volume visualization requires accounting for both the characteristics of the data and the human user. Perception-based volume visualization techniques incorporate the characteristics of human perceptual and cognitive processes into the visualization pipeline, so that the results match human perceptual habits and effectively reveal data features. This paper surveys perception-based volume visualization techniques from three aspects: user-driven volume data analysis, perception-conforming visualization design, and illustrative volume visualization, and points out directions in this field that call for further exploration.
5.
Three-dimensional visualization of medical images holds great promise for medical research and clinical diagnosis and treatment, and is an important area of modern medical imaging research. Its methods are usually divided into surface rendering and volume rendering; many concrete algorithms have been proposed, and some that combine characteristics of both are classed as hybrid rendering methods. This paper surveys several representative algorithms, discusses their characteristics and interrelations, and analyzes and evaluates the application scenarios and prospects of each class of methods.
6.
Three-dimensional geological body visualization is an effective way to reproduce 3D geological information quickly and promptly and to support integrated analysis. This paper studies and designs a 3D geological body visualization system based on a vector structure.
7.
To display the abstract motion data of 3D time-varying flow fields as intuitive images, the flow-field data are visualized by combining the 3D line integral convolution (3D LIC) algorithm with direct volume rendering (DVR). A white-noise texture is convolved in both the forward and backward directions along the streamline through each point, and the convolution result is then rendered with DVR. To improve computational efficiency, the algorithm introduces a parallel processing mechanism, using the GPU for large-scale…
8.
Based on the OpenGL ES 2.0 platform and targeting the hardware limitations of mobile devices, this paper proposes a novel rendering technique for large 3D volume data models that overcomes these limitations and achieves fine, high-resolution rendering of large 3D medical volume models on such platforms. It also proposes a software architecture that lets existing volume rendering techniques be applied more readily on mobile devices. A set of experiments with different volume data models at progressively higher resolutions shows that the method is feasible and robust, achieving visualization of large volume data models on performance-constrained platforms.
9.
A Preliminary Study of Web-Based Data Visualization Techniques (Total citations: 1; self-citations: 0; citations by others: 1)
With the continuing development of information technology, data visualization has attracted growing attention. This paper analyzes the advantages and disadvantages of several image formats popular on the Web and, drawing on the open-source projects JFreeChart and Batik, describes how to generate image files in PNG and SVG formats.
11.
Codebook generation is central to vector-quantization-based volume rendering, and the initial codebook strongly affects the codebook generation algorithm. Existing initialization methods require many iterations over the massive raw data, with frequent transfers between disk, main memory, and the GPU (graphics processing unit), making them inefficient. To address initial-codebook extraction, this paper proposes an initialization algorithm based on a data-stream clustering strategy. The basic idea is to treat the massive 3D volume as a data stream processed in chunks: a local codebook is formed for each chunk, and all local codebooks are then clustered to form the final initial codebook. This greatly reduces the number of data reads and transfers while fully exploiting the GPU's parallel computing power. Simulation results show that the proposed method improves both efficiency and quality considerably.
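The two-stage strategy described above can be sketched in a few lines of NumPy: each chunk yields a small local codebook via k-means, and the pooled local codewords are clustered again into the initial codebook, so the raw data is read only once. Chunk counts, codebook sizes, and the plain k-means used here are illustrative stand-ins for the paper's actual parameters.

```python
import numpy as np

def kmeans(data, k, iters=8, seed=0):
    """Plain Lloyd k-means, used both per chunk and for the merge step."""
    rng = np.random.default_rng(seed)
    centers = data[rng.choice(len(data), k, replace=False)].astype(float)
    for _ in range(iters):
        d = np.linalg.norm(data[:, None, :] - centers[None, :, :], axis=2)
        assign = d.argmin(axis=1)
        for c in range(k):
            members = data[assign == c]
            if len(members):
                centers[c] = members.mean(axis=0)
    return centers

def streaming_initial_codebook(chunks, local_k=16, final_k=8):
    """Stage 1: local codebook per chunk. Stage 2: cluster the pooled
    local codewords into the initial codebook (single pass over data)."""
    local_books = [kmeans(c, local_k, seed=i) for i, c in enumerate(chunks)]
    pooled = np.vstack(local_books)
    return kmeans(pooled, final_k, seed=99)

rng = np.random.default_rng(2)
chunks = [rng.normal(size=(500, 4)) for _ in range(4)]  # stand-in for volume blocks
init_cb = streaming_initial_codebook(chunks)
```

Stage 2 operates only on `n_chunks * local_k` codewords rather than the full volume, which is where the reduction in reads and transfers comes from.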
12.
B. Marangelli, Image and Vision Computing, 1991, 9(6): 347-352
This paper describes a method for designing a codebook for vector quantization (VQ), based on preprocessing of the input data which makes them block-stationary, and on a criterion which takes into account the error visibility of the image to be coded. Test results, carried out at about 1.2 bits/pel bit rate, indicate that the proposed VQ enables reconstruction of images (both outside and inside the training set) with very low distortion, and exhibits high robustness, the variance of the SNR being appreciably lower than in the case of unprocessed data.
13.
Stanley C. Ahalt, Prakoon Chen, Cheng-Taou Chou, Tzyy-Ping Jung, The Journal of Supercomputing, 1992, 5(4): 307-330
We describe an implementation of a vector quantization codebook design algorithm based on the frequency-sensitive competitive learning artificial neural network. The implementation, designed for use on high-performance computers, employs both multitasking and vectorization techniques. A C version of the algorithm tested on a CRAY Y-MP8/864 is discussed. We show how the implementation can be used to perform vector quantization, and demonstrate its use in compressing digital video image data. Two images are used, with various size codebooks, to test the performance of the implementation. The results show that the supercomputer techniques employed have significantly decreased the total execution time without affecting vector quantization performance. This work was supported by a Cray University Research Award and by NASA Lewis research grant number NAG3-1164.
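A minimal sketch of the frequency-sensitive competitive learning (FSCL) rule at the core of the algorithm above: each codeword's distance to the input is scaled by how often that codeword has already won, so under-used codewords eventually win and no unit stays "dead". The learning rate, epoch count, and data here are illustrative, not the paper's configuration.

```python
import numpy as np

def fscl_codebook(data, k, epochs=5, lr=0.1, seed=0):
    """Frequency-sensitive competitive learning: win counts bias the
    competition so that all codewords get used."""
    rng = np.random.default_rng(seed)
    codebook = data[rng.choice(len(data), k, replace=False)].astype(float)
    wins = np.ones(k)
    for _ in range(epochs):
        for x in data[rng.permutation(len(data))]:
            # Frequency-scaled distance decides the winner.
            d = wins * np.linalg.norm(codebook - x, axis=1)
            w = d.argmin()
            codebook[w] += lr * (x - codebook[w])   # move winner toward input
            wins[w] += 1
    return codebook

rng = np.random.default_rng(3)
vectors = rng.normal(size=(300, 2))
cb = fscl_codebook(vectors, 8)
```

The inner loop over training vectors is where a supercomputer implementation would apply vectorization, since the distance computation dominates the cost.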
14.
In this contribution, we deal with active learning, which gives the learner the power to select training samples. We propose a novel query algorithm for local learning models, a class of learners that has not been considered in the context of active learning until now. Our query algorithm is based on the idea of selecting a query on the borderline of the actual classification. This is done by drawing on the geometrical properties of local models that typically induce a Voronoi tessellation on the input space, so that the Voronoi vertices of this tessellation offer themselves as prospective query points. The performance of the new query algorithm is tested on the two-spirals problem with promising results.
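To make the "borderline" idea above concrete, here is a simplified NumPy sketch: instead of computing Voronoi vertices explicitly as the paper does, it scores each candidate query by how nearly equidistant it is to its nearest prototypes of two different classes, since points on the decision borderline of a nearest-prototype learner are exactly the equidistant ones. All names and the toy data are illustrative.

```python
import numpy as np

def borderline_query(labeled_x, labeled_y, candidates):
    """Select the candidate closest to the decision borderline of a
    nearest-prototype learner: smallest gap between the nearest
    distances to two different classes."""
    best, best_gap = None, np.inf
    for c in candidates:
        d = np.linalg.norm(labeled_x - c, axis=1)
        # Nearest distance to each class.
        per_class = {y: d[labeled_y == y].min() for y in set(labeled_y)}
        vals = sorted(per_class.values())
        gap = vals[1] - vals[0] if len(vals) > 1 else np.inf
        if gap < best_gap:
            best, best_gap = c, gap
    return best

X = np.array([[0.0, 0.0], [1.0, 0.0]])      # one prototype per class
y = np.array([0, 1])
cands = np.array([[0.5, 0.5], [0.1, 0.0], [0.9, 0.0]])
q = borderline_query(X, y, cands)           # picks the equidistant candidate
```

With two prototypes, the zero-gap locus is the perpendicular bisector between them, which is also an edge of their Voronoi tessellation — the structure the paper's query points are drawn from.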
15.
High-dimensional data is pervasive in many fields such as engineering, geospatial, and medical. It is a constant challenge to build tools that help people in these fields understand the underlying complexities of their data. Many techniques perform dimensionality reduction or other “compression” to show views of data in either two or three dimensions, leaving the data analyst to infer relationships with remaining independent and dependent variables. Contextual self-organizing maps offer a way to represent and interact with all dimensions of a data set simultaneously. However, the computational times needed to generate these representations limit their feasibility in realistic industry settings. Batch self-organizing maps provide a data-independent method that allows the training process to be parallelized and therefore sped up, saving time and money involved in processing data prior to analysis. This research parallelizes the batch self-organizing map by combining network partitioning and data partitioning methods with CUDA on the graphics processing unit to achieve significant training time reductions. Reductions in training times of up to twenty-five times were found while using map sizes where other implementations have shown weakness. The reduced training times open up the contextual self-organizing map as a viable option for engineering data visualization.
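A compact NumPy sketch of one batch-SOM epoch, the step the work above parallelizes: all samples are assigned to their best-matching unit at once, then every unit is recomputed as a neighborhood-weighted mean. Because each epoch depends only on the previous weights, both the assignment and the update are embarrassingly parallel, which is what makes the batch variant a good fit for CUDA. Grid size, kernel width, and data are illustrative.

```python
import numpy as np

def batch_som(data, grid_w, grid_h, epochs=10, sigma=1.0, seed=0):
    """Batch self-organizing map: per epoch, find each sample's
    best-matching unit (BMU), then set every unit to the Gaussian
    neighborhood-weighted mean of all samples."""
    rng = np.random.default_rng(seed)
    n_units = grid_w * grid_h
    weights = data[rng.choice(len(data), n_units, replace=False)].astype(float)
    coords = np.array([(i, j) for i in range(grid_h) for j in range(grid_w)], float)
    for _ in range(epochs):
        d = np.linalg.norm(data[:, None, :] - weights[None, :, :], axis=2)
        bmu = d.argmin(axis=1)                       # BMU index per sample
        # Gaussian kernel between each unit and each sample's BMU: (units, samples).
        g = np.exp(-np.linalg.norm(coords[:, None] - coords[None, bmu], axis=2) ** 2
                   / (2.0 * sigma ** 2))
        weights = (g @ data) / g.sum(axis=1, keepdims=True)
    return weights

rng = np.random.default_rng(4)
samples = rng.random((200, 3))
w = batch_som(samples, 4, 4)
```

Unlike the online SOM, no update depends on the order of samples within an epoch, so the per-sample and per-unit work can be partitioned across threads or GPU blocks without changing the result.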
16.
This paper presents a cursive character recognizer, a crucial module in any cursive word recognition system based on a segmentation-and-recognition approach. The character classification is achieved by using support vector machines (SVMs) and a neural gas. The neural gas is used to verify whether the lower- and upper-case versions of a given letter can be joined into a single class or not. Once this is done for every letter, the character recognition is performed by SVMs. A database of 57,293 characters was used to train and test the cursive character recognizer. SVMs compare notably better, in terms of recognition rate, with popular neural classifiers such as learning vector quantization and the multi-layer perceptron. The SVM recognition rate is among the highest presented in the literature for cursive character recognition.
17.
To meet the demands of keyword spotting in low-resource or zero-resource settings, a keyword recognition algorithm based on automatic audio segmentation and deep neural networks is proposed. First, an improved metric-distance-based speech segmentation algorithm splits the continuous speech stream into isolated syllables, which are further subdivided into short audio segments associated with phoneme states; the resulting segments have large inter-segment feature differences and small intra-segment feature variance. An improved vector quantization method then encodes the state features of the audio segments, achieving high-precision quantization for in-vocabulary keywords and low-precision quantization for out-of-vocabulary words. Finally, with the syllable as the recognition unit, a compressed state-transition matrix serves as the syllable's overall feature and is fed into a deep neural network for recognition. Simulation results show that the algorithm identifies multiple specified keywords from natural speech fairly accurately, is easy to understand and simple to train, and exhibits good robustness.
19.
We investigate the extraction of effective color features for a content-based image retrieval (CBIR) application in dermatology. Effectiveness is measured by the rate of correct retrieval of images from four color classes of skin lesions. We employ and compare two different methods to learn favorable feature representations for this special application: limited rank matrix learning vector quantization (LiRaM LVQ) and a Large Margin Nearest Neighbor (LMNN) approach. Both methods use labeled training data and provide a discriminant linear transformation of the original features, potentially to a lower dimensional space. The extracted color features are used to retrieve images from a database by a k-nearest neighbor search. We perform a comparison of retrieval rates achieved with extracted and original features for eight different standard color spaces. We achieved significant improvements in every examined color space. The increase of the mean correct retrieval rate lies between 10% and 27% over the range of k = 1-25 retrieved images, and the correct retrieval rate lies between 84% and 64%. We present explicit combinations of RGB and CIE-Lab color features corresponding to healthy and lesion skin. LiRaM LVQ and the computationally more expensive LMNN give comparable results for large values of the method parameter κ of LMNN (κ≥25), while LiRaM LVQ outperforms LMNN for smaller values of κ. We conclude that feature extraction by LiRaM LVQ leads to considerable improvement in color-based retrieval of dermatologic images.
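The k-nearest-neighbor retrieval step used in the study above is simple enough to sketch directly; the feature dimensionality and database here are illustrative stand-ins for the transformed color features.

```python
import numpy as np

def knn_retrieve(db_feats, query_feat, k=5):
    """Return indices of the k database entries nearest to the query
    by Euclidean distance in the (transformed) feature space."""
    d = np.linalg.norm(db_feats - query_feat, axis=1)
    return np.argsort(d)[:k]

rng = np.random.default_rng(5)
db = rng.random((100, 6))            # stand-in for transformed color features
idx = knn_retrieve(db, db[42], k=5)  # querying with a database entry
```

Learning the linear transformation (LiRaM LVQ or LMNN) changes only the space in which these distances are computed; the retrieval mechanism itself stays a plain nearest-neighbor search.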