Similar Literature
20 similar documents found.
1.
Smooth surface extraction using partial differential equations (PDEs) is a well-known and widely used technique for visualizing volume data. Existing approaches operate on gridded data, mainly on regular structured grids. When considering unstructured point-based volume data, where sample points neither form regular patterns nor are connected in any form, one would typically resample the data over a grid prior to applying the known PDE-based methods. We propose an approach that directly extracts smooth surfaces from unstructured point-based volume data without prior resampling or mesh generation. When operating on unstructured data, one needs to derive neighborhood information quickly. The respective information is retrieved by partitioning the 3D domain into cells using a kd-tree and operating on its cells. We exploit neighborhood information to estimate gradients and mean curvature at every sample point using a four-dimensional least-squares fitting approach. Gradients and mean curvature are required for applying the chosen PDE-based method, which combines hyperbolic advection toward an isovalue of a given scalar field with mean curvature flow. Since we are using an explicit time-integration scheme, time steps and neighbor locations are bounded to ensure convergence of the process. To avoid small global time steps, we use asynchronous local integration. We extract the surface by successively fitting a smooth auxiliary function to the data set. This auxiliary function is initialized as a signed distance function. For each sample and for every time step we compute the respective gradient, the mean curvature, and a stable time step. With this information, the auxiliary function is updated using explicit Euler time integration. The process then continues with the next sample point in time. If the norm of the auxiliary function's gradient at a sample exceeds a given threshold at some time, the auxiliary function is reinitialized to a signed distance function. After convergence of the evolution, the resulting smooth surface is obtained by extracting the zero isosurface from the auxiliary function using direct isosurface extraction from unstructured point-based volume data and rendering the extracted surface using point-based rendering methods.
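The per-sample update described above can be condensed into a short sketch. Below is a minimal explicit Euler step in Python, assuming an advection speed proportional to (f - isovalue), an assumed curvature weight eps, and an assumed reinitialization threshold on the gradient norm:

```python
import numpy as np

def euler_step(phi, f, isovalue, grad_phi, mean_curv, dt, eps=0.2):
    """One explicit Euler step per sample point: hyperbolic advection
    toward the isovalue of the scalar field f, blended with mean
    curvature flow. All inputs are per-sample arrays (unstructured
    points); eps and the reinitialization threshold are assumptions."""
    grad_norm = np.linalg.norm(grad_phi, axis=1)
    advection = (f - isovalue) * grad_norm        # drives phi toward the isosurface
    smoothing = eps * mean_curv * grad_norm       # mean curvature flow term
    phi_new = phi + dt * (-advection + smoothing)
    # Flag samples where |grad phi| drifts too far from 1, i.e. where the
    # signed-distance property is lost and reinitialization is due.
    needs_reinit = np.abs(grad_norm - 1.0) > 0.5
    return phi_new, needs_reinit
```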

2.
A software-based normalized ECG data acquisition system is developed for both normal and abnormal ECG records. The system can transfer waveform data recorded on paper into a digital time database. A flatbed scanner is used to form an image database for each signal of the 12-lead ECG records. These TIFF-formatted gray-tone images are then converted into two-tone binary images with the help of histogram analysis. A run-length smearing technique is used to remove the vertical and horizontal line segments of the graph paper. A thinning algorithm is applied to each image to obtain its skeleton (a one-pixel-wide representation), which is essential to avoid excess data points in the database. After extracting pixel-by-pixel coordinate information from the image of each signal of the 12-lead ECG records, the data are sorted to regenerate the signal. From the standard deviation of the database, a graphical analysis is performed to examine its consistency.
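The scan-to-signal pipeline lends itself to a compact sketch. Below is a minimal Python version using scikit-image, with Otsu thresholding standing in for the histogram analysis and an assumed run-length threshold for the grid removal:

```python
import numpy as np
from skimage import io, filters, morphology

def remove_long_runs(img, max_run, axis=0):
    """Run-length smearing step: erase runs of foreground pixels longer
    than max_run along the given axis (the ruled grid lines)."""
    out = img.copy()
    for line in (out if axis == 0 else out.T):
        run = 0
        for i, v in enumerate(line):
            run = run + 1 if v else 0
            if run == max_run:
                line[i - max_run + 1:i + 1] = False   # wipe the whole run
            elif run > max_run:
                line[i] = False
    return out

def digitize_trace(path, grid_run=60):
    """Scan -> binarize -> remove grid -> skeletonize -> sorted samples."""
    gray = io.imread(path, as_gray=True)
    binary = gray < filters.threshold_otsu(gray)      # dark trace on light paper
    binary = remove_long_runs(binary, grid_run, axis=0)
    binary = remove_long_runs(binary, grid_run, axis=1)
    skeleton = morphology.skeletonize(binary)         # 1-pixel representation
    ys, xs = np.nonzero(skeleton)
    order = np.argsort(xs)                            # sort along the time axis
    return xs[order], ys[order]
```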

3.
Data sets resulting from physical simulations typically contain a multitude of physical variables. It is, therefore, desirable that visualization methods take into account the entire multi-field volume data rather than concentrating on one variable. We present a visualization approach based on surface extraction from multi-field particle volume data. The surfaces segment the data with respect to the underlying multi-variate function. Decisions on segmentation properties are based on the analysis of the multi-dimensional feature space. The feature space exploration is performed by an automated multi-dimensional hierarchical clustering method, whose resulting density clusters are shown in the form of density level sets in a 3D star coordinate layout. In the star coordinate layout, the user can select clusters of interest. A selected cluster in feature space corresponds to a segmenting surface in object space. Based on the segmentation property induced by the cluster membership, we extract a surface from the volume data. Our driving applications are Smoothed Particle Hydrodynamics (SPH) simulations, where each particle carries multiple properties. The data sets are given in the form of unstructured point-based volume data. We directly extract our surfaces from such data without prior resampling or grid generation. The surface extraction computes individual points on the surface, which is supported by an efficient neighborhood computation. The extracted surface points are rendered using point-based rendering operations. Our approach combines methods in scientific visualization for object-space operations with methods in information visualization for feature-space operations.
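The feature-space half of the pipeline is easy to sketch. Below is a stand-in using plain agglomerative clustering (the paper's own method is a density-based hierarchical clustering; the names and cluster count here are illustrative):

```python
import numpy as np
from sklearn.cluster import AgglomerativeClustering

def segment_by_feature_space(features, positions, chosen, n_clusters=4):
    """Cluster per-particle multi-field attributes (feature space), then
    pull out the object-space positions of one selected cluster; those
    particles seed the surface extraction."""
    labels = AgglomerativeClustering(n_clusters=n_clusters).fit_predict(features)
    return positions[labels == chosen]
```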

4.
Domain-specific knowledge is often recorded by experts in the form of unstructured text. For example, in the medical domain, clinical notes from electronic health records contain a wealth of information. Similar practices are found in other domains. The challenge we discuss in this paper is how to identify and extract part names from technicians' repair notes, a noisy unstructured text data source from General Motors' archives of solved vehicle repair problems, with the goal of developing a robust and dynamic reasoning system to be used as a repair adviser by service technicians. In the present work, we discuss two approaches to this problem. We present an algorithm for ontology-guided entity disambiguation that uses existing knowledge sources, such as domain-specific taxonomies and other structured data. We illustrate its use in the automotive domain, using the GM parts ontology and the unit structure of repair-manual text to build context models, which are then used to disambiguate mentions of part-related entities in the text. We also describe extraction of part names with a small amount of annotated data using hidden Markov models (HMMs) with shrinkage, achieving an f-score of approximately 80%. Next, we used linear-chain conditional random fields (CRFs) to model the observation dependencies present in the repair notes. Using CRFs did not lead to improved performance, but a slight improvement over the HMM results was obtained by using a weighted combination of the HMM and CRF models.
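The final combination step admits a one-screen illustration. Below is a sketch of a weighted vote over per-token label distributions, assuming both taggers expose posterior marginals and that the mixing weight is tuned on held-out data:

```python
import numpy as np

def combine_taggers(p_hmm, p_crf, w=0.6):
    """Weighted combination of the HMM and CRF part-name taggers.
    p_hmm, p_crf: per-token label distributions, shape
    (n_tokens, n_labels); w is an assumed mixing weight."""
    p = w * p_hmm + (1.0 - w) * p_crf
    return p.argmax(axis=1)          # predicted label index per token
```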

5.
The automatic metadata extraction mechanism of the directed acyclic graph support vector machine (DAG-SVM) is analyzed, and an automatic extraction system for metadata in bioinformatics data is proposed based on this mechanism and the W3C Resource Description Framework (RDF). The system effectively avoids the class-overlap problem and the problem of uniformly labeling the extracted data, providing an integrated data foundation for extending bioinformatics systems toward Semantic Web applications. The system has broad application prospects for Semantic-Web-oriented bioinformatics systems.
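For readers unfamiliar with DAG-SVM, its decision procedure is short. Below is an illustrative decision-DAG traversal over pairwise classifiers (the mapping pairwise_svms and the class list are hypothetical; this is not the paper's implementation):

```python
def ddag_classify(x, classes, pairwise_svms):
    """Decision DAG over pairwise SVMs: keep a candidate list and let
    each binary test eliminate one class until a single class remains.
    This ordering avoids the class-overlap ambiguity of one-vs-rest."""
    remaining = list(classes)
    while len(remaining) > 1:
        a, b = remaining[0], remaining[-1]
        winner = pairwise_svms[(a, b)].predict([x])[0]
        remaining.remove(a if winner == b else b)
    return remaining[0]
```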

6.
Data extraction and information retrieval from a large volume of data is always tedious and difficult work. Therefore, an effective and efficient technology for searching for desired data becomes increasingly important. Since metadata with certain attributes can characterize data files, extracting data with the help of metadata can be expected to simplify the work. Metadata classification has been proposed to significantly improve the performance of scientific data extraction. In this paper, a scientific data extraction architecture based on the assistance of a metadata classification mechanism is proposed. The architecture is built on a mediator/wrapper design to develop a scientific data extraction system that helps oceanographers analyze ocean ecology. The results of the performance evaluation show that the architecture, with the help of metadata classification, can extract a user's desired data effectively and efficiently.
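The mediator/wrapper flow with a metadata-classification front end can be sketched in a few lines. All names here (catalog layout, wrapper interface) are hypothetical, as the paper does not publish its API:

```python
def extract(query, catalog, wrappers):
    """Mediator-side sketch: match the query against metadata classes
    first, then forward it only to wrappers whose sources carry a
    matching class, instead of querying every source."""
    wanted = {cls for cls, keywords in catalog.items()
              if any(k in query for k in keywords)}
    hits = []
    for w in wrappers:
        if w.metadata_class in wanted:
            hits.extend(w.fetch(query))
    return hits
```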

7.
A fully automated wrapper for information extraction from Web pages is presented. The motivation behind such systems lies in the emerging need for going beyond the concept of "human browsing." The World Wide Web is today the main repository for all kinds of information and has so far been very successful in disseminating information to humans. By automating the process of information retrieval, further utilization by targeted applications is enabled. The key idea in our novel system is to exploit the format of a Web page to discover its underlying structure in order to finally infer and extract pieces of information from the page. Our system first identifies the section of the Web page that contains the information to be extracted and then extracts it by using clustering techniques and other tools of statistical origin. STAVIES can operate without human intervention and does not require any training. The main innovation and contribution of the proposed system consist of introducing a signal-wise treatment of the tag structural hierarchy and using hierarchical clustering techniques to segment the Web pages. The importance of such a treatment is significant, since it permits abstracting away from the raw tag-manipulating approach. Experimental results and comparisons with other state-of-the-art systems are presented and discussed in the paper, indicating the high performance of the proposed algorithm.
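The structural idea, treating the tag hierarchy as a signal and grouping text leaves that share structure, can be miniaturized as follows. This is an illustrative reimplementation, not the STAVIES code; grouping by identical tag path stands in for the hierarchical clustering:

```python
from collections import defaultdict
from html.parser import HTMLParser

class LeafCollector(HTMLParser):
    """Record each text leaf together with its tag path; the path
    sequence is the 'signal' that the clustering then segments."""
    def __init__(self):
        super().__init__()
        self.path, self.leaves = [], []
    def handle_starttag(self, tag, attrs):
        self.path.append(tag)
    def handle_endtag(self, tag):
        while self.path and self.path.pop() != tag:
            pass                      # tolerate unclosed tags
    def handle_data(self, data):
        if data.strip():
            self.leaves.append(("/".join(self.path), data.strip()))

def group_leaves(html):
    """Leaves with identical tag paths - e.g. the repeated records of a
    listing page - fall into the same group (one 'cluster')."""
    collector = LeafCollector()
    collector.feed(html)
    groups = defaultdict(list)
    for path, text in collector.leaves:
        groups[path].append(text)
    return groups
```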

8.
9.
Conceptual-model-based data extraction from multiple-record Web pages
Electronically available data on the Web is exploding at an ever-increasing pace. Much of this data is unstructured, which makes searching hard and traditional database querying impossible. Many Web documents, however, contain an abundance of recognizable constants that together describe the essence of a document's content. For these kinds of data-rich, multiple-record documents (e.g., advertisements, movie reviews, weather reports, travel information, sports summaries, financial statements, obituaries, and many others) we can apply a conceptual-modeling approach to extract and structure data automatically. The approach is based on an ontology – a conceptual-model instance – that describes the data of interest, including relationships, lexical appearance, and context keywords. By parsing the ontology, we can automatically produce a database scheme and recognizers for constants and keywords, and then invoke routines to recognize and extract data from unstructured documents and structure it according to the generated database scheme. Experiments show that it is possible to achieve good recall and precision ratios for documents that are rich in recognizable constants and narrow in ontological breadth. Our approach is less labor-intensive than other approaches that manually or semiautomatically generate wrappers, and it is generally insensitive to changes in Web-page format.
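A toy version of the parse-the-ontology step makes the idea concrete. The two concepts, their lexical patterns, and context keywords below are invented for illustration; the real system generates such recognizers from a full conceptual-model instance:

```python
import re

# Hypothetical ontology fragment: lexical appearance plus context keywords.
ONTOLOGY = {
    "Price": {"pattern": r"\$\d+(?:,\d{3})*(?:\.\d{2})?",
              "keywords": ["price", "asking", "obo"]},
    "Year":  {"pattern": r"\b(?:19|20)\d{2}\b",
              "keywords": ["year", "model"]},
}

def extract_constants(text):
    """Compile one recognizer per concept and keep matches whose
    40-character context window mentions a context keyword."""
    record = {}
    for concept, spec in ONTOLOGY.items():
        hits = []
        for m in re.finditer(spec["pattern"], text):
            ctx = text[max(0, m.start() - 40):m.end() + 40].lower()
            if any(k in ctx for k in spec["keywords"]):
                hits.append(m.group())
        record[concept] = hits
    return record
```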

10.
11.
Digital image processing is now widely available to users of remotely sensed data. Although such processing offers many new opportunities for the user (or analyst), it also makes heavy demands on the acquisition of new skills if the data are to yield useful information efficiently. In deciding on the best approach for image classification, the user faces a bewildering array of choices, many of which have been poorly evaluated. It is clear, however, that the use of both internal and external contextual information can be of great value in improving classification performance. The ultimate use of information extracted from remote sensing data is strongly affected by its compatibility with other geographic data planes. Problems in achieving such compatibility in the framework of automated geographical information systems are discussed. The success of image analysis and classification methods is highly dependent on the relationships between the capabilities of the sensing systems themselves and the character of the phenomena being studied. This is illustrated by reference to the capabilities of future high-resolution satellite systems.

12.
To reduce the dimensionality of hyperspectral data while preserving important spectral features, a dimensionality-reduction method combining pure-pixel extraction and Independent Component Analysis (ICA) is proposed, based on a comparative analysis of the ICA mixture model and the hyperspectral linear model. The method determines the number of features by estimating the Virtual Dimensionality (VD), uses the Automatic Target Generation Process (ATGP) to extract pure-pixel vectors from the original data as initialization vectors for the ICA algorithm, generates independent components with negentropy as the objective function, and achieves dimensionality reduction of the hyperspectral data by screening with higher-order statistics. Classification experiments show that the method not only solves the random-ordering problem of traditional ICA but also improves classification accuracy by 6.83% compared with the classical dimensionality-reduction algorithm Principal Components Analysis (PCA). It preserves the features of the hyperspectral data well while greatly reducing the data volume, which benefits subsequent analysis and application.
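The ATGP initialization is the most self-contained piece and fits in a few lines. Below is a standard ATGP sketch; the pairing with the ICA initialization (which must happen in the whitened space) is left as a comment because the exact wiring is an assumption:

```python
import numpy as np

def atgp(X, n_targets):
    """Automatic Target Generation Process: repeatedly pick the pixel
    with the largest norm in the subspace orthogonal to the targets
    found so far. X: (n_pixels, n_bands)."""
    targets = []
    P = np.eye(X.shape[1])                    # orthogonal-complement projector
    for _ in range(n_targets):
        proj = X @ P
        t = X[np.argmax(np.einsum("ij,ij->i", proj, proj))]
        targets.append(t)
        U = np.array(targets).T               # (n_bands, k)
        P = np.eye(X.shape[1]) - U @ np.linalg.pinv(U)
    return np.array(targets)

# The pure-pixel vectors, projected into the whitened space, would then
# initialize the ICA unmixing matrix (e.g. FastICA's w_init), replacing
# the random initialization that causes the ordering problem.
```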

13.
《微型机与应用》2016,(11):11-13
Traditional GPS positioning-data extraction and storage systems implemented on the VC++ platform can no longer meet real-time and reliability requirements and suffer from poor extensibility, compatibility, and portability. Using GPS positioning technology, multithreaded serial-port communication, and database storage and access techniques, a real-time GPS positioning-data extraction and storage system was designed and implemented in Java with the Eclipse development tool. Test results show that the system runs stably and the experimental data are valid and reliable, meeting the expected goals.
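The original system is written in Java; the extract-and-store loop it describes looks roughly like the following Python sketch using pyserial and sqlite3 (port name, baud rate, and table schema are assumptions):

```python
import sqlite3
import threading

import serial  # pyserial

def start_gps_logger(port="/dev/ttyUSB0", db="fixes.db"):
    """Background thread: read NMEA sentences from the serial port,
    parse $GPGGA fixes, and store them in a database in real time."""
    con = sqlite3.connect(db, check_same_thread=False)
    con.execute("CREATE TABLE IF NOT EXISTS fix (utc TEXT, lat TEXT, lon TEXT)")
    def worker():
        with serial.Serial(port, 9600, timeout=1) as s:
            while True:
                line = s.readline().decode("ascii", "ignore").strip()
                if line.startswith("$GPGGA"):
                    f = line.split(",")
                    # f[1]=UTC, f[2]/f[3]=lat+hemisphere, f[4]/f[5]=lon+hemisphere
                    con.execute("INSERT INTO fix VALUES (?, ?, ?)",
                                (f[1], f[2] + f[3], f[4] + f[5]))
                    con.commit()
    threading.Thread(target=worker, daemon=True).start()
```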

14.
This study presents an intelligent model based on fuzzy systems for establishing a quantitative relationship between seismic attributes and petrophysical data. The proposed methodology comprises two major steps. First, the petrophysical data, including water saturation (Sw) and porosity, are predicted from seismic attributes using various fuzzy inference systems (FISs): Sugeno (SFIS), Mamdani (MFIS), and Larsen (LFIS). Second, a committee fuzzy inference system (CFIS) is constructed using a hybrid genetic algorithm and pattern search (GA-PS) technique. The inputs of the CFIS model are the outputs and averages of the FIS petrophysical predictions. The methodology is illustrated using 3D seismic and petrophysical data from 11 wells of an Iranian offshore oil field in the Persian Gulf. The performance of the CFIS model is compared with a probabilistic neural network (PNN). The results show that the CFIS method performs better than the neural network, the best individual fuzzy model, and a simple averaging method.
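The committee step reduces to learning weights over the individual FIS outputs. Below is a simplified sketch that fits convex weights by constrained least squares; the paper uses a hybrid GA-PS optimizer and also feeds in averages, so a local SLSQP optimizer over three outputs is only a stand-in:

```python
import numpy as np
from scipy.optimize import minimize

def fit_committee(fis_outputs, targets):
    """fis_outputs: (n_samples, 3) predictions from the Sugeno, Mamdani,
    and Larsen systems; returns convex combination weights minimizing
    MSE against the measured petrophysical targets."""
    n = fis_outputs.shape[1]
    mse = lambda w: np.mean((fis_outputs @ w - targets) ** 2)
    res = minimize(mse, np.full(n, 1.0 / n),
                   bounds=[(0.0, 1.0)] * n,
                   constraints={"type": "eq", "fun": lambda w: w.sum() - 1.0})
    return res.x
```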

15.
Integrating a large number of Web information sources may significantly increase the utility of the World-Wide Web. A promising solution to the integration is the use of a Web information mediator that provides seamless, transparent access for clients. Information mediators need wrappers to access a Web source as a structured database, but building wrappers by hand is impractical. Previous work on wrapper induction is too restrictive to handle the large number of Web pages that contain tuples with missing attributes, multiple values, variant attribute permutations, exceptions, and typos. This paper presents SoftMealy, a novel wrapper representation formalism based on a finite-state transducer (FST) and contextual rules. This approach can wrap a wide range of semistructured Web pages because FSTs can encode each different attribute permutation as a path. A SoftMealy wrapper can be induced from a handful of labeled examples using our generalization algorithm. We have implemented this approach in a prototype system and tested it on real Web pages. The performance statistics show that the sizes of the induced wrappers, as well as the required training effort, are linear with respect to the structural variance of the test pages. Our experiments also show that the induced wrappers can generalize over unseen pages.
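The runtime side of a SoftMealy-style wrapper is compact enough to sketch. Below is an illustrative finite-state transducer with hand-written contextual rules; the paper's contribution is inducing such rules from labeled examples, which is not shown here:

```python
def run_wrapper(tokens, trans, emit_states):
    """Contextual rules (trans) switch states on delimiter tokens;
    tokens seen while in an emitting state become attribute values."""
    state, record = "start", {}
    for tok in tokens:
        if (state, tok) in trans:        # delimiter: take the transition
            state = trans[(state, tok)]
        elif state in emit_states:       # body token: emit as a value
            record.setdefault(state, []).append(tok)
    return record

# Hypothetical rules for lines like "<b> Ann Lee </b> - CS Dept":
trans = {("start", "<b>"): "name", ("name", "</b>"): "gap",
         ("gap", "-"): "office"}
tokens = "<b> Ann Lee </b> - CS Dept".split()
print(run_wrapper(tokens, trans, emit_states={"name", "office"}))
# -> {'name': ['Ann', 'Lee'], 'office': ['CS', 'Dept']}
```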

16.
This paper presents a fault diagnostic system that enables the identification of engine malfunctions using the vibration data of crank systems, with experimentation on one sample engine. In this system, the equations governing the system characteristics are determined by a time-domain modal analysis (TMA) technique. The frequencies, damping factors, and mode shapes are used in a harmonic synthesis process to yield zero end torques and stresses in each crankshaft span. Two physical simulation cases were illustrated. In the first test case, an abnormal excursion of the bearing support system was detected, and the replacement of the appropriate bearings is suggested. In the second test case, the combined stresses at the fillets of the crankpin and journal units were found to exceed the allowable fatigue limit, and the modification of both elements was suggested.
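A flavor of the time-domain identification step, extracting frequencies and damping factors directly from a vibration record, is given below as a basic Prony/linear-prediction sketch; it is not the paper's full TMA procedure:

```python
import numpy as np

def prony_modes(x, dt, order):
    """Fit a linear-prediction model to the record x and convert its
    characteristic roots into modal frequencies [Hz] and damping
    ratios; order is the assumed model order."""
    n = len(x)
    A = np.column_stack([x[i:n - order + i] for i in range(order)])
    a = np.linalg.lstsq(A, -x[order:], rcond=None)[0]
    roots = np.roots(np.r_[1.0, a[::-1]])     # discrete-time poles
    s = np.log(roots.astype(complex)) / dt    # continuous-time poles
    freqs = np.abs(s.imag) / (2 * np.pi)
    damping = -s.real / np.abs(s)
    return freqs, damping
```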

17.
In addition to Chinese character I/O, one of the most important issues for Chinese information processing is automatic extraction of words from textual data. Having discussed the characteristics of Chinese words and sentences, we proved in this paper that this problem cannot be thoroughly resolved. Then, various algorithms for extraction of words from Chinese sentences are reviewed. Finally, a new algorithm is put forward, based on which a highly automatic Chinese information processing system has been developed.
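Of the algorithm families such surveys review, dictionary-based maximum matching is the simplest to show. Below is a baseline forward-maximum-matching sketch (not the new algorithm proposed in the paper):

```python
def forward_max_match(sentence, lexicon, max_len=4):
    """Greedy segmentation: at each position take the longest lexicon
    word (up to max_len characters), falling back to one character."""
    words, i = [], 0
    while i < len(sentence):
        for L in range(min(max_len, len(sentence) - i), 0, -1):
            cand = sentence[i:i + L]
            if L == 1 or cand in lexicon:
                words.append(cand)
                i += L
                break
    return words

# forward_max_match("研究生命科学", {"研究生", "研究", "生命", "科学"})
# -> ['研究生', '命', '科学'], the classic ambiguity maximum matching can hit
```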

18.
《Graphical Models》2014,76(6):593-608
Volumetric datasets have already been used in multiple domains. Recent improvements in acquisition devices have boosted the size of available datasets. We present an out-of-core algorithm for iso-surface extraction from huge volumetric data. Our algorithm uses a divide-and-conquer approach that divides the volume and processes every meta-cell sequentially. We combine our approach with a dual surface extraction algorithm in order to build adaptive meshes. Our solution produces patches of adaptive meshes that can finally be combined to generate a manifold, closed surface. As our approach processes only a part of the volume in-core, with a minimum of redundancy, it can handle very large volumes by adjusting the meta-cell size to fit the available in-core memory. Moreover, our algorithm can be parallelized to reduce processing times and increase interactivity. We present examples of the application of our solution to huge segmented volumes.
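The streaming structure of the algorithm can be sketched in a few lines. Marching cubes stands in for the paper's dual extraction, and load_metacell is an assumed callback that returns one block (with a one-voxel overlap so patches stitch without cracks) plus its origin:

```python
import numpy as np
from skimage.measure import marching_cubes

def out_of_core_isosurface(load_metacell, grid_dims, iso):
    """Process one meta-cell at a time so only a small block is ever
    in-core; shrink the meta-cell size to fit the available memory."""
    patches = []
    for idx in np.ndindex(*grid_dims):
        block, origin = load_metacell(*idx)
        if block.min() < iso < block.max():        # skip cells the surface misses
            verts, faces, _, _ = marching_cubes(block, level=iso)
            patches.append((verts + origin, faces))
    return patches
```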

19.
The extraction of line-structured data from engineering drawings
The aim of this research is to allow automatic conversion of an engineering drawing into a form similar to that produced by a computer-aided draughting system. Progress towards generating a representation of the drawing as a set of line segments and interpreted characters is described, within an overall strategy for planning the sampling of the image and the application of analysis algorithms. Examples of results so far achieved on a real drawing are shown.
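One conventional route to the line-segment representation the paper targets is a Hough-based vectorization; the sketch below is illustrative and does not reproduce the paper's staged sampling strategy:

```python
from skimage.transform import probabilistic_hough_line

def drawing_to_segments(binary_drawing):
    """Vectorize a binarized drawing into line segments; the thresholds
    are assumptions to be tuned per scan resolution."""
    return probabilistic_hough_line(binary_drawing, threshold=10,
                                    line_length=25, line_gap=3)
# Each returned item is ((x0, y0), (x1, y1)): one line segment.
```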

20.
To monitor the running state of high-speed trains, multifractal analysis is applied to train vibration monitoring data, and generalized dimension spectrum parameters based on multifractal theory are extracted: the spectrum maximum Dmax, the difference between the spectrum maximum and minimum ΔD, and the spectrum mean. Using these three generalized dimension spectrum parameters as features, the normal state, air-spring failure, yaw-damper failure, and lateral-damper failure states of the train are characterized. Experimental results show that Dmax, ΔD, and the spectrum mean can all characterize the running state of high-speed trains with a degree of stability, providing an effective method for identifying the running state of high-speed trains.
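A standard box-counting estimator of the generalized dimension spectrum, from which the three features (Dmax, ΔD, and the spectrum mean) can be read off, is sketched below; it is not the paper's exact procedure:

```python
import numpy as np

def generalized_dimensions(x, qs, box_counts=(4, 8, 16, 32, 64)):
    """Estimate D_q for a vibration record x by regressing the
    partition-sum statistic against log box size."""
    mass = np.abs(x) / np.abs(x).sum()        # normalized measure
    D = []
    for q in qs:
        stat, log_eps = [], []
        for s in box_counts:
            p = np.array([b.sum() for b in np.array_split(mass, s)])
            p = p[p > 0]
            if q == 1:
                stat.append(np.sum(p * np.log(p)))         # information dimension
            else:
                stat.append(np.log(np.sum(p ** q)) / (q - 1))
            log_eps.append(np.log(1.0 / s))
        D.append(np.polyfit(log_eps, stat, 1)[0])          # slope = D_q
    return np.array(D)

# Features: D = generalized_dimensions(signal, qs=np.arange(-10, 11))
# Dmax = D.max(); delta_D = D.max() - D.min(); D_mean = D.mean()
```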
