Similar Documents
20 similar documents found (search time: 93 ms)
1.
The Parallel Vectors (PV) Operator extracts the locations of points where two vector fields are parallel. In general, these features are line structures. The PV operator has been used successfully for a variety of problems, including finding vortex-core lines or extremum lines. We present a new generic feature extraction method for multiple 3D vector fields: the Approximate Parallel Vectors (APV) Operator extracts lines where all fields are approximately parallel. The definition of the APV operator is based on the application of PV to two vector fields that are derived from the given set of fields. The APV operator enables the direct visualization of features of vector field ensembles without processing fields individually and without causing visual clutter. We give a theoretical analysis of the APV operator and demonstrate its utility on a number of ensemble data sets.
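As a rough illustration of the PV idea (our own sketch, not the authors' implementation): on sampled data, one can flag points where the normalized cross product of the two fields is near zero; the actual PV operator extracts the line structures passing through such points.

```python
import numpy as np

def parallel_mask(v, w, tol=1e-6):
    """v, w: arrays of shape (..., 3). Boolean mask of points where v and w
    are (near-)parallel, i.e. where the normalized cross product vanishes."""
    cross = np.cross(v, w)
    norm = np.linalg.norm(v, axis=-1) * np.linalg.norm(w, axis=-1)
    safe = norm > tol                     # ignore points where a field vanishes
    ratio = np.zeros(norm.shape)
    ratio[safe] = np.linalg.norm(cross, axis=-1)[safe] / norm[safe]
    return safe & (ratio < tol)

# First pair is parallel (w = 2v), second is orthogonal.
v = np.array([[1.0, 2.0, 3.0], [1.0, 0.0, 0.0]])
w = np.array([[2.0, 4.0, 6.0], [0.0, 1.0, 0.0]])
print(parallel_mask(v, w))  # [ True False]
```

A tolerance-based test like this is only a pointwise surrogate; the PV operator proper locates the zero set with sub-cell accuracy and connects it into lines.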

2.
Sharp edges are important shape features and their extraction has been extensively studied both on point clouds and surfaces. We consider the problem of extracting sharp edges from a sparse set of colour-and-depth (RGB-D) images. The noise-ridden depth measurements are challenging for existing feature extraction methods that work solely in the geometric domain (e.g. points or meshes). By utilizing both colour and depth information, we propose a novel feature extraction method that produces much cleaner and more coherent feature lines. We make two technical contributions. First, we show that intensity edges can augment the depth map to improve normal estimation and feature localization from a single RGB-D image. Second, we design a novel algorithm for consolidating feature points obtained from multiple RGB-D images. By utilizing normals and ridge/valley types associated with the feature points, our algorithm is effective in suppressing noise without smearing nearby features.

3.
Web page classification has become a challenging task due to the exponential growth of the World Wide Web. Uniform Resource Locator (URL)-based web page classification systems play an important role, but high accuracy may not be achievable as URLs contain minimal information. Nevertheless, URL-based classifiers along with a rejection framework can be used as a first-level filter in a multistage classifier, and costlier feature extraction from contents can be done in later stages. However, the noisy and irrelevant features present in URLs demand feature selection methods for URL classification. Therefore, we propose a supervised feature selection method by which relevant URL features are identified using statistical methods. We propose a new feature weighting method for a Naive Bayes classifier by embedding the term goodness obtained from the feature selection method. We also propose a rejection framework for the Naive Bayes classifier that uses the posterior probability to determine a confidence score. The proposed method is evaluated on the Open Directory Project and WebKB data sets. Experimental results show that our method can be an effective first-level filter. McNemar tests confirm that our approach significantly improves the performance.
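A hypothetical sketch (not the paper's code) of the two ideas in this abstract: a Naive Bayes classifier over URL tokens whose per-token log-likelihoods are scaled by a term-goodness weight from feature selection, plus a rejection rule that abstains when the posterior confidence is low.

```python
import math
from collections import Counter, defaultdict

def train_nb(docs):
    """docs: list of (tokens, label). Returns class priors, token counts, vocabulary."""
    prior = Counter(label for _, label in docs)
    counts = defaultdict(Counter)
    for tokens, label in docs:
        counts[label].update(tokens)
    vocab = {t for tokens, _ in docs for t in tokens}
    return prior, counts, vocab

def classify(tokens, prior, counts, vocab, weights, reject_below=0.6):
    total = sum(prior.values())
    log_scores = {}
    for label in prior:
        denom = sum(counts[label].values()) + len(vocab)  # Laplace smoothing
        lp = math.log(prior[label] / total)
        for t in tokens:
            # Term goodness acts as a per-feature weight on the log-likelihood.
            lp += weights.get(t, 1.0) * math.log((counts[label][t] + 1) / denom)
        log_scores[label] = lp
    m = max(log_scores.values())
    exp = {l: math.exp(s - m) for l, s in log_scores.items()}
    best = max(exp, key=exp.get)
    conf = exp[best] / sum(exp.values())                  # posterior confidence
    return (best, conf) if conf >= reject_below else (None, conf)

docs = [(["news", "sport"], "sports"), (["news", "world"], "sports"),
        (["shop", "cart"], "shopping"), (["shop", "deal"], "shopping")]
prior, counts, vocab = train_nb(docs)
label, conf = classify(["shop", "cart"], prior, counts, vocab, weights={"shop": 2.0})
print(label)  # shopping
```

The token names and the weight value are illustrative; the paper derives its term-goodness scores from its own statistical feature selection.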

4.
The topological structure of scalar, vector, and second-order tensor fields provides an important mathematical basis for data analysis and visualization. In this paper, we extend this framework towards higher-order tensors. First, we establish formal uniqueness properties for a geometrically constrained tensor decomposition. This allows us to define and visualize topological structures in symmetric tensor fields of orders three and four. We clarify that in 2D, degeneracies occur at isolated points, regardless of tensor order. However, for orders higher than two, they are no longer equivalent to isotropic tensors, and their fractional Poincaré index prevents us from deriving continuous vector fields from the tensor decomposition. Instead, sorting the terms by magnitude leads to a new type of feature, lines along which the resulting vector fields are discontinuous. We propose algorithms to extract these features and present results on higher-order derivatives and higher-order structure tensors.

5.
6.
We present a dimension reduction and feature extraction method for the visualization and analysis of function field data. Function fields are a class of high-dimensional, multi-variate data in which data samples are one-dimensional scalar functions. Our approach focuses upon the creation of high-dimensional range-space segmentations, from which we can generate meaningful visualizations and extract separating surfaces between features. We demonstrate our approach on high-dimensional spectral imagery, and particulate pollution data from air quality simulations.

7.
With the rapid growth of networked data communications in size and complexity, network administrators today face growing challenges in protecting their networked computers and devices from all kinds of attacks. This paper proposes a new concentric-circle visualization method for multi-dimensional network data. The method can be used to identify the main features of network attacks, such as DDoS attacks, by displaying their recognizable visual patterns. To reduce edge overlaps and crossings, we arrange multiple axes as concentric circles rather than the traditional parallel lines, and we use polycurves to link values (vertexes) rather than the polylines used in the parallel-coordinate approach. Several heuristics are applied to improve the readability of views, and we discuss the advantages as well as the limitations of the method. In comparison with parallel-coordinate visualization, our approach reduces edge overlaps and crossings by more than 15%. In the second stage of the method, we further enhance the readability of views by increasing the edge-crossing angle. Finally, we introduce our prototype system, a visual interactive network scan detection system called CCScanViewer, built on the new visualization approach; experiments have shown that the approach is effective in detecting attack features in a variety of networking patterns, such as network scans and DDoS attacks.
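A minimal geometric sketch of the concentric-axes idea. The exact mapping is our assumption, not the paper's layout: each dimension i becomes a circle of radius r0 + i*dr, a normalized attribute value in [0, 1] picks the angle on that circle, and polycurves would then connect the resulting vertices across circles.

```python
import math

def concentric_vertex(axis_index, value, r0=1.0, dr=1.0):
    """Place a normalized value in [0, 1] on the circle for one axis
    (hypothetical layout: axis i is the circle of radius r0 + i*dr)."""
    r = r0 + axis_index * dr
    theta = 2 * math.pi * value
    return (r * math.cos(theta), r * math.sin(theta))

# One record with three normalized attributes -> one vertex per circle.
record = [0.0, 0.25, 0.5]
vertices = [concentric_vertex(i, v) for i, v in enumerate(record)]
print(vertices[0])  # value 0 on the innermost circle -> (1.0, 0.0)
```

A renderer would draw smooth curves (the paper's polycurves) through these vertices instead of straight polyline segments.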

8.
Feature extraction helps to maximize the useful information within a feature vector by reducing the dimensionality and making the classification effective and simple. In this paper, a novel feature extraction method is proposed: genetic programming (GP) is used to discover features, while the Fisher criterion is employed to assign fitness values. This produces non-linear features for both two-class and multiclass recognition, reflecting the discriminating information between classes. Compared with other GP-based methods that need to generate c discriminant functions for solving c-class (c>2) pattern recognition problems, a single feature obtained by a single GP run appears to be highly satisfactory in this approach. The proposed method is experimentally compared with non-linear feature extraction methods such as kernel generalized discriminant analysis and kernel principal component analysis. Results demonstrate the capability of the proposed approach to transform information from the high-dimensional feature space into a single-dimensional space by automatically discovering the relationships between data, producing improved performance.
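The Fisher criterion used here as GP fitness is standard: for a candidate scalar feature, it is the ratio of between-class scatter to within-class scatter, so well-separated classes score higher. A minimal sketch:

```python
def fisher_criterion(values, labels):
    """Fisher score of a scalar feature: between-class scatter / within-class scatter."""
    classes = set(labels)
    overall = sum(values) / len(values)
    sb = sw = 0.0
    for c in classes:
        vc = [v for v, l in zip(values, labels) if l == c]
        mc = sum(vc) / len(vc)
        sb += len(vc) * (mc - overall) ** 2        # between-class scatter
        sw += sum((v - mc) ** 2 for v in vc)       # within-class scatter
    return sb / sw if sw > 0 else float("inf")

# A feature that separates the classes scores far higher than one that mixes them.
sep = fisher_criterion([0.0, 0.1, 5.0, 5.1], ["a", "a", "b", "b"])
mix = fisher_criterion([0.0, 5.0, 0.1, 5.1], ["a", "a", "b", "b"])
print(sep > mix)  # True
```

In the paper's setting, `values` would be the outputs of a GP-evolved feature on the training samples, and this score would serve as that individual's fitness.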

9.
Feature-based flow visualization is naturally dependent on feature extraction. To extract flow features, often higher order properties of the flow data are used such as the Jacobian or curvature properties, implicitly describing the flow features in terms of their inherent flow characteristics (for example, collinear flow and vorticity vectors). In this paper, we present recent research that leads to the (not really surprising) conclusion that feature extraction algorithms need to be extended to a time-dependent analysis framework (in terms of time derivatives) when dealing with unsteady flow data. Accordingly, we present two extensions of the parallel-vectors-based vortex extraction criteria to the time-dependent domain and show the improvements of feature-based flow visualization in comparison to the steady versions of this extraction algorithm both in the context of a high-resolution data set, that is, a simulation specifically designed to evaluate our new approach and for a real-world data set from a concrete application.

10.
Many scientific fields rely on massive workflows that utilize multiple cores simultaneously. To support such large-scale scientific workflows, high-capacity parallel systems such as supercomputers are widely used. To increase the utilization of these systems, most schedulers use a backfilling policy based on the user's estimated runtime, which turns out to be extremely inaccurate because users overestimate their jobs. Therefore, in this paper, an efficient machine learning approach is presented to predict the runtime of parallel applications. The proposed method consists of three phases: first, important features of the history log data are identified by factor analysis; second, the parallel programs are clustered based on these important features; third, prediction models are built from the pattern similarity of parallel program log data and used to estimate runtime. In the experiments, we use workload logs from parallel systems (NASA-iPSC, LANL-CM5, SDSC-Par95, SDSC-Par96, and CTC-SP2) to evaluate the effectiveness of our approach. Comparing root-mean-square error with other techniques, experimental results show that the proposed method improves accuracy by up to 69.56%.
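The core prediction idea can be sketched in a few lines. This is a simplified stand-in for the paper's pipeline (we substitute a plain nearest-neighbour average for the factor-analysis and clustering stages; the feature names and numbers are hypothetical):

```python
import math

def predict_runtime(history, features, k=2):
    """history: list of (feature_vector, runtime).
    Predicts the mean runtime of the k most similar past jobs."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    nearest = sorted(history, key=lambda h: dist(h[0], features))[:k]
    return sum(runtime for _, runtime in nearest) / len(nearest)

history = [
    ((64, 1.0), 120.0),    # (cores, input size in GB) -> runtime in seconds
    ((64, 1.2), 130.0),
    ((512, 8.0), 900.0),
    ((512, 9.0), 950.0),
]
print(predict_runtime(history, (64, 1.1)))  # averages the two small jobs -> 125.0
```

The paper's contribution lies in choosing which log features matter (factor analysis) and how to group jobs (clustering) before this kind of similarity-based estimate is made.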

11.
The evolution of strain and the development of material anisotropy in models of the Earth's mantle flow convey important information about how to interpret the geometric relationship between observations of seismic anisotropy and the actual mantle flow field. By combining feature extraction techniques such as path line integration and tensor accumulation, we compute time-varying strain vector fields that build the foundation for a number of feature extraction and visualization techniques. The proposed field segmentation, clustering, histogram and multi-volume visualization techniques facilitate an intuitive understanding of three-dimensional strain in such flow fields, overcoming limitations of previous methods such as 2D line plots and slicing. We present applications of our approach to an artificial time-varying flow data set and a real-world example of stationary flow in a subduction zone, and discuss the challenges of processing these geophysical data sets as well as the insights gained.

12.
Robust feature extraction is an integral part of scientific visualization. In unsteady vector field analysis, researchers recently directed their attention towards the computation of near-steady reference frames for vortex extraction, which is a numerically challenging endeavor. In this paper, we utilize a convolutional neural network to combine two steps of the visualization pipeline in an end-to-end manner: the filtering and the feature extraction. We use neural networks for the extraction of a steady reference frame for a given unsteady 2D vector field. By conditioning the neural network to noisy inputs and resampling artifacts, we obtain numerically more stable results than existing optimization-based approaches. Supervised deep learning typically requires a large amount of training data. Thus, our second contribution is the creation of a vector field benchmark data set, which is generally useful for any local deep learning-based feature extraction. Based on the Vatistas velocity profile, we formulate a parametric vector field mixture model that we parameterize based on numerically computed example vector fields in near-steady reference frames. Given the parametric model, we can efficiently synthesize thousands of vector fields that serve as input to our deep learning architecture. The proposed network is evaluated on an unseen numerical fluid flow simulation.

13.
Feature-based flow visualization is naturally dependent on feature extraction. To extract flow features, often higher-order properties of the flow data are used such as the Jacobian or curvature properties, implicitly describing the flow features in terms of their inherent flow characteristics (e.g., collinear flow and vorticity vectors). In this paper we present recent research which leads to the (not really surprising) conclusion that feature extraction algorithms need to be extended to a time-dependent analysis framework (in terms of time derivatives) when dealing with unsteady flow data. Accordingly, we present two extensions of the parallel vectors based vortex extraction criteria to the time-dependent domain and show the improvements of feature-based flow visualization in comparison to the steady versions of this extraction algorithm both in the context of a high-resolution dataset, i.e., a simulation specifically designed to evaluate our new approach, as well as for a real-world dataset from a concrete application.

14.
We propose a novel vortex core line extraction method based on the λ2 vortex region criterion in order to improve the detection of vortex features for 3D flow visualization. The core line is defined as a curve that connects λ2 minima restricted to planes that are perpendicular to the core line. The basic algorithm consists of the following stages: (1) λ2 field construction and isosurface extraction; (2) computation of the curve skeleton of the λ2 isosurface to build an initial prediction for the core line; (3) correction of the locations of the prediction by searching for λ2 minima on planes perpendicular to the core line. In particular, we consider the topology of the vortex core lines, guaranteeing the same topology as the initial curve skeleton. Furthermore, we propose a geometry-guided definition of vortex bifurcation, which represents the split of one core line into two parts. Finally, we introduce a user-guided approach in order to narrow down vortical regions taking into account the graph of λ2 along the computed vortex core line. We demonstrate the effectiveness of our method by comparing our results to previous core line detection methods with both simulated and experimental data; in particular, we show robustness of our method for noise-affected data.
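The λ2 criterion underlying stage (1) is standard: given the velocity-gradient tensor J, form the strain-rate tensor S = (J + Jᵀ)/2 and the rotation tensor Ω = (J − Jᵀ)/2; λ2 is the median eigenvalue of S² + Ω², and vortical regions are where λ2 < 0. A minimal numpy sketch of this pointwise computation (not the paper's extraction pipeline):

```python
import numpy as np

def lambda2(J):
    """λ2 criterion: median eigenvalue of S^2 + Ω^2 for a 3x3 velocity gradient J."""
    S = 0.5 * (J + J.T)                          # strain-rate tensor
    O = 0.5 * (J - J.T)                          # rotation tensor
    eig = np.linalg.eigvalsh(S @ S + O @ O)      # symmetric -> real, ascending
    return eig[1]                                # median eigenvalue

# Rigid-body rotation about the z-axis is a textbook vortex: λ2 < 0.
J_rot = np.array([[0.0, -1.0, 0.0],
                  [1.0,  0.0, 0.0],
                  [0.0,  0.0, 0.0]])
# Pure isotropic expansion has no swirling motion: λ2 >= 0.
J_exp = np.eye(3)
print(lambda2(J_rot) < 0, lambda2(J_exp) >= 0)  # True True
```

Stages (2) and (3) of the method then operate on the scalar λ2 field produced by evaluating this at every grid point.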

15.
To effectively improve the classification accuracy of hyperspectral image data and reduce the dependence on large data sets, an improved prototype-space feature extraction scheme based on the weighted fuzzy C-means algorithm is proposed, building on the prototype-space feature extraction method. The scheme applies a different weight to each feature through the weighted fuzzy C-means algorithm, ensuring that the extracted features retain a high amount of useful information, and thereby reducing the size of the training set without lowering the information available for classification. Experimental results show that, compared with the widely accepted prototype-space extraction algorithm, the scheme remains reasonably stable on relatively small data sets and achieves comparatively high classification accuracy.
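As a hedged illustration of the weighting idea (the exact update rule is our assumption, not the paper's): a fuzzy C-means membership update where each feature contributes to the distance according to its weight, so down-weighted features influence cluster assignment less.

```python
def weighted_fcm_memberships(points, centers, feature_weights, m=2.0):
    """One fuzzy C-means membership update with a feature-weighted squared distance.
    m is the usual fuzzifier; returns a membership row per point."""
    def d2(p, c):
        # Weighted squared distance; tiny floor avoids division by zero at a center.
        return sum(w * (x - y) ** 2 for w, x, y in zip(feature_weights, p, c)) or 1e-12
    U = []
    for p in points:
        dists = [d2(p, c) for c in centers]
        U.append([1.0 / sum((dj / dk) ** (1.0 / (m - 1)) for dk in dists)
                  for dj in dists])
    return U

points = [(0.0, 0.0), (1.0, 0.0)]
centers = [(0.0, 0.0), (1.0, 0.0)]
U = weighted_fcm_memberships(points, centers, feature_weights=(1.0, 1.0))
print(round(U[0][0], 2))  # a point sitting on a center belongs almost fully to it
```

In the scheme above, the feature weights would be learned rather than fixed, steering the prototype-space extraction toward informative features.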

16.
This paper investigates the potential of multitemporal/polarization C-band SAR data for land-cover classification. Multitemporal Radarsat-1 data with HH polarization and ENVISAT ASAR data with VV polarization acquired in the Yedang plain, Korea, are used to classify five typical land-cover classes in an agricultural area. The presented methodology consists of two analytical stages: one for feature extraction and the other for classification based on the combination of features. Both a traditional SAR signal property analysis-based approach and principal-component analysis (PCA) are applied in the feature extraction stage. Particular attention is paid to the interpretation of each principal component using principal-component loadings. The tau model, applied as a decision-level fusion methodology, provides a formal framework in which the posterior probabilities derived from different sensor data can be combined. In the case study, the combination of PCA-based features showed improved classification accuracy for both Radarsat-1 and ENVISAT ASAR data compared with the traditional SAR signal property analysis-based approach. The integration of PCA-based features from multiple polarizations (i.e. HH from Radarsat-1, and both VV and VH from ENVISAT ASAR) and different incidence angles contributed to a significant improvement in discrimination capability for dry fields, which could not be properly classified using Radarsat-1 or ENVISAT ASAR data alone, and thus yielded the best classification accuracy. These results indicate that using multiple-polarization SAR data with a proper feature extraction stage would improve accuracy in multitemporal SAR data classification, although further consideration should be given to the polarization and incidence-angle dependency of complex land-cover classes through more experiments.
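A compact sketch of the PCA feature-extraction stage: derive principal components from multitemporal band values and inspect the component loadings (the correlation of each input band with each retained component) to interpret the components, as the abstract describes. The toy data below is hypothetical.

```python
import numpy as np

def pca_loadings(X, k):
    """X: samples x bands. Returns (scores, loadings) for the top-k components,
    where loadings[j, c] = corr(band j, component c)."""
    Xc = X - X.mean(axis=0)
    cov = np.cov(Xc, rowvar=False)
    vals, vecs = np.linalg.eigh(cov)             # eigenvalues in ascending order
    top = vecs[:, ::-1][:, :k]                   # eigenvectors of largest variance
    scores = Xc @ top                            # PCA-based features
    loadings = np.array([[np.corrcoef(Xc[:, j], scores[:, c])[0, 1]
                          for c in range(k)] for j in range(X.shape[1])])
    return scores, loadings

# Bands 1 and 2 vary together (e.g. similar temporal backscatter behaviour);
# band 3 carries independent information.
X = np.array([[1.0, 2.0, 0.0], [2.0, 4.0, 1.0], [3.0, 6.0, 0.0], [4.0, 8.0, 1.0]])
scores, loadings = pca_loadings(X, k=2)
print(np.abs(loadings[:2, 0]) > 0.9)  # bands 1-2 load strongly on the first PC
```

Reading the loading table is what lets one attach a physical meaning (here, which acquisition dates or polarizations drive a component) to each PCA-based feature before fusing them.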

17.
We present an approach for extracting extremal feature lines of scalar indicators on surface meshes, based on discrete Morse theory. By computing initial Morse-Smale complexes of the scalar indicators of the mesh, we obtain a candidate set of extremal feature lines of the surface. A hierarchy of Morse-Smale complexes is computed by prioritizing feature lines according to a novel criterion and applying a cancellation procedure that allows us to select the most significant lines. Given the scalar indicators on the vertices of the mesh, the presented feature line extraction scheme is interpolation-free and needs no derivative estimates. The technique is insensitive to noise and depends on only one parameter: the feature significance. We use the technique to extract surface features, yielding impressive non-photorealistic images.

18.
Bimodal biometrics has been found to outperform single biometrics and is usually implemented using matching-score-level or decision-level fusion, though such fusion exploits less information from the bimodal biometric traits for personal authentication than fusion at the feature level. This paper proposes matrix-based complex PCA (MCPCA), a feature-level fusion method for bimodal biometrics that uses a complex matrix to denote two biometric traits from one subject. The method takes the two images from the two biometric traits of a subject as the real part and imaginary part of a complex matrix, respectively. MCPCA applies a novel and mathematically tractable algorithm for extracting features directly from complex matrices. We also show that MCPCA has a sound theoretical foundation and that the previous matrix-based PCA technique, two-dimensional PCA (2DPCA), is only a special form of the proposed method. On the other hand, the features extracted by the developed method may have a large number of data items (each real number in the obtained features is called one data item). In order to obtain features with a small number of data items, we have devised a two-step feature extraction scheme. Our experiments show that the proposed two-step feature extraction scheme achieves higher classification accuracy than the 2DPCA and PCA techniques.
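An illustrative sketch of the MCPCA construction described above: stack the two traits of each subject into one complex matrix (trait A as the real part, trait B as the imaginary part), then extract features by projecting onto the leading eigenvectors of a covariance built from these complex matrices. The covariance form below follows matrix-based PCA in the 2DPCA style; its details, and the random toy data, are our assumptions.

```python
import numpy as np

def mcpca_projection(pairs, k=1):
    """pairs: list of (A, B) real matrices of equal shape, one pair per subject.
    Returns an n x k complex projection matrix."""
    Z = [a + 1j * b for a, b in pairs]           # two traits -> one complex matrix
    mean = sum(Z) / len(Z)
    # Column-space covariance, as in matrix-based PCA (cf. 2DPCA); Hermitian.
    G = sum((z - mean).conj().T @ (z - mean) for z in Z) / len(Z)
    vals, vecs = np.linalg.eigh(G)               # real eigenvalues, ascending
    return vecs[:, -k:]                          # directions of largest variance

rng = np.random.default_rng(0)
pairs = [(rng.random((4, 3)), rng.random((4, 3))) for _ in range(5)]
W = mcpca_projection(pairs, k=2)
features = (pairs[0][0] + 1j * pairs[0][1]) @ W  # complex feature matrix, 4 x 2
print(features.shape)
```

Each entry of `features` is complex, i.e. two data items per entry; the paper's two-step scheme exists precisely to shrink this count before classification.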

19.
This paper presents a new feature extraction method. It can accurately describe workpieces of arbitrary shape, computes quickly, and yields simple extraction results. A vision system built on this algorithm can recognize arbitrarily placed two-dimensional workpieces, even occluded or overlapping ones, and satisfies the real-time requirements of industrial production lines well.

20.
Estimating task execution time is a prerequisite for workflow scheduling in cloud data centers. To address the inability of existing workflow-task execution-time prediction methods to effectively extract features from categorical and numerical data, a prediction method based on multi-dimensional feature fusion is proposed. First, a stacked residual recurrent network with an attention mechanism maps categorical data from a high-dimensional sparse feature space to a low-dimensional dense one, strengthening the model's ability to interpret categorical data and extract categorical features effectively. Second, the extreme gradient boosting algorithm discretizes and encodes the numerical data; sparsifying the dense input vectors improves the non-linear expressiveness of the numerical features. On this basis, a multi-dimensional heterogeneous feature-fusion strategy combines the extracted categorical and numerical features with the samples' original input features, and a prediction model built on the fused features achieves accurate prediction of cloud workflow task execution times. Finally, simulation experiments on a real cloud data center cluster data set show that, compared with existing baseline algorithms, the method achieves higher prediction accuracy and is suitable for big-data-driven prediction of cloud workflow task execution times.
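A simplified, hypothetical sketch of the fusion idea: map a categorical value to a dense embedding vector (a stand-in for the attention-based stacked residual network), discretize a numerical value into quantile bins (a stand-in for the gradient-boosting-based encoding), and concatenate both with the raw input. All names and values below are illustrative.

```python
import bisect

def quantile_bins(values, n_bins):
    """Bin edges at the empirical quantiles of the training values."""
    s = sorted(values)
    return [s[int(len(s) * i / n_bins)] for i in range(1, n_bins)]

def fuse_features(task_type, cpu_request, embeddings, bin_edges, n_bins):
    emb = embeddings[task_type]                   # dense categorical feature
    one_hot = [0.0] * n_bins                      # sparse code for the numeric value
    one_hot[bisect.bisect_right(bin_edges, cpu_request)] = 1.0
    return emb + one_hot + [cpu_request]          # fused feature vector

embeddings = {"batch": [0.2, -0.1], "stream": [-0.3, 0.4]}   # hypothetical lookup
edges = quantile_bins([1, 2, 2, 4, 8, 8, 16, 32], n_bins=4)
x = fuse_features("batch", 8, embeddings, edges, n_bins=4)
print(len(x))  # 2 (embedding) + 4 (bins) + 1 (raw) = 7
```

A downstream regressor trained on such fused vectors would then produce the execution-time estimate.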
