Similar Documents
20 similar documents found (search time: 15 ms).
1.
Texture analysis using the Peano curve alone does not provide sufficient discriminatory power to classify natural textures. An improved method that applies several space-filling curves simultaneously to texture analysis and classification is proposed.
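As a rough illustration of deriving texture features from the pixel sequences produced by different scan curves, the sketch below uses a row-major and a boustrophedon (serpentine) scan as simple stand-ins for Peano/Hilbert orders; the statistics and names are illustrative, not those of the paper.

```python
import numpy as np

def scan_orders(h, w):
    """Two simple scan orders over an h-by-w image: row-major and boustrophedon.
    They stand in for Peano/Hilbert orders in this sketch."""
    row_major = [(r, c) for r in range(h) for c in range(w)]
    serpentine = [(r, c if r % 2 == 0 else w - 1 - c)
                  for r in range(h) for c in range(w)]
    return {"row_major": row_major, "serpentine": serpentine}

def texture_features(img):
    """Concatenate simple 1D statistics of the pixel sequence from each scan
    order (mean absolute first difference, std of first differences)."""
    h, w = img.shape
    feats = []
    for name, order in scan_orders(h, w).items():
        seq = np.array([img[r, c] for r, c in order], dtype=float)
        d = np.diff(seq)
        feats += [np.mean(np.abs(d)), np.std(d)]
    return np.array(feats)

img = np.random.rand(16, 16)           # toy "texture"
print(texture_features(img))           # 4-dimensional feature vector
```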

2.
A. J. Fisher, Software, 1986, 16(1): 5-12
A new algorithm is described which generates the co-ordinates of the t-th point along a Hilbert curve, given the value of the parameter t. The algorithm is expressed in the concurrent programming language occam.
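The sketch below shows a standard bit-manipulation routine that maps the parameter t to the coordinates of the t-th Hilbert-curve point. It is written in Python rather than occam and is not Fisher's algorithm, only a commonly used equivalent shown for illustration.

```python
def hilbert_point(order, t):
    """Return (x, y) of the t-th point of a Hilbert curve filling a
    2^order x 2^order grid, for t in [0, 4**order)."""
    n = 1 << order
    x = y = 0
    s = 1
    while s < n:
        rx = 1 & (t // 2)
        ry = 1 & (t ^ rx)
        # rotate/flip the current quadrant so the base pattern repeats
        if ry == 0:
            if rx == 1:
                x, y = s - 1 - x, s - 1 - y
            x, y = y, x
        x += s * rx
        y += s * ry
        t //= 4
        s *= 2
    return x, y

# first few points of an order-2 (4x4) Hilbert curve
print([hilbert_point(2, t) for t in range(16)])
```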

3.
An iterative algorithm is described, based on the replication process of the Hilbert matrix, for encoding and decoding the Hilbert order. The time complexity of the proposed algorithm is smaller than those published previously, and the space complexity is bounded by a constant. Moreover, the new algorithm has a wider applicability when compared with existing algorithms for certain machine-word lengths. A new variant of the Hilbert curve is suggested to overcome a shortcoming of the traditional Hilbert curve for the mapping problem. The proposed coding algorithms for the traditional Hilbert curve are also applicable to the new variant without increasing the time and space complexities. Copyright © 2006 John Wiley & Sons, Ltd.
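For comparison, the standard bit-manipulation encoder (grid coordinates to Hilbert index) is sketched below; it is not the matrix-replication algorithm of the paper, only an illustrative equivalent with constant extra space.

```python
def hilbert_index(order, x, y):
    """Encode grid coordinates (x, y) into their Hilbert-curve index on a
    2^order x 2^order grid.  Standard bit-manipulation formulation, shown
    for illustration only."""
    n = 1 << order
    d = 0
    s = n // 2
    while s > 0:
        rx = 1 if (x & s) > 0 else 0
        ry = 1 if (y & s) > 0 else 0
        d += s * s * ((3 * rx) ^ ry)
        # rotate/flip the quadrant so the recursion pattern repeats
        if ry == 0:
            if rx == 1:
                x, y = n - 1 - x, n - 1 - y
            x, y = y, x
        s //= 2
    return d

print(hilbert_index(2, 3, 2))   # 11 on the order-2 (4x4) curve
```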

4.
Geometric partial differential equations for curves and surfaces are used in many fields, such as computational geometry, image processing and computer graphics. In this paper, a few differential operators defined on space curves are introduced. Based on these operators, several second-order and fourth-order geometric flows for evolving space curves are constructed. Some properties of the changing rates of the arc-length of the evolved curves and areas swept by the curves are discussed. Short-term and long-term behaviors of the evolved curves are illustrated.
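A minimal sketch of a second-order (curve-shortening style) flow on a closed space curve, assuming a uniformly parameterized polygon and a plain explicit Euler step; the paper's differential operators and fourth-order flows are not reproduced here.

```python
import numpy as np

def evolve_closed_curve(points, steps=100, dt=0.05):
    """Crude explicit-Euler discretization of a second-order flow on a closed
    3D polygon: each vertex moves along the discrete Laplacian of its
    neighbours.  A sketch only, not the paper's operators."""
    x = np.asarray(points, dtype=float).copy()
    for _ in range(steps):
        lap = np.roll(x, -1, axis=0) - 2.0 * x + np.roll(x, 1, axis=0)
        x += dt * lap
    return x

# a wavy closed space curve relaxing toward a rounder loop
t = np.linspace(0, 2 * np.pi, 200, endpoint=False)
curve = np.c_[np.cos(t), np.sin(t), 0.3 * np.sin(5 * t)]
print(evolve_closed_curve(curve, steps=200).std(axis=0))  # the high-frequency z-oscillation damps fastest
```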

5.
Existing filter-type feature selection algorithms do not take the intrinsic structure of nonlinear data into account, so their classification accuracy falls far below that of wrapper methods. To address this, a feature selection algorithm for high-dimensional data based on reproducing kernel Hilbert space (RKHS) mapping is proposed. First, a search tree is built and searched with the branch-and-bound method; then the internal structure of the nonlinear data is analyzed via the RKHS mapping; finally, the optimal distance measure is chosen according to the internal structure of the data set. Comparative simulation results show that the proposed method attains classification accuracy close to that of wrapper feature selection algorithms while holding a clear advantage in computational efficiency, making it suitable for big-data analysis.
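As a rough illustration of an RKHS-based separability measure that could drive such a subset search, the sketch below scores candidate feature subsets with a squared Maximum Mean Discrepancy under an RBF kernel; this is a stand-in criterion, not the paper's exact measure, and the data and names are made up.

```python
import numpy as np

def rbf_mmd2(A, B, gamma=1.0):
    """Squared Maximum Mean Discrepancy between samples A and B under an RBF
    kernel: a simple RKHS-based measure of class separability."""
    def k(X, Y):
        d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
        return np.exp(-gamma * d2)
    return k(A, A).mean() + k(B, B).mean() - 2 * k(A, B).mean()

def score_subset(X, y, subset, gamma=1.0):
    """Separability of a candidate feature subset for a two-class problem;
    such a score could guide a branch-and-bound style subset search."""
    Xs = X[:, list(subset)]
    return rbf_mmd2(Xs[y == 0], Xs[y == 1], gamma)

rng = np.random.default_rng(4)
X = rng.normal(size=(200, 5))
y = (X[:, 2] + 0.5 * X[:, 0] ** 2 > 0.5).astype(int)   # nonlinear class rule
for subset in [(0, 2), (1, 3), (0, 1, 2)]:
    print(subset, round(score_subset(X, y, subset), 4))
```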

6.
Existing spatiotemporal indexes suffer from either large update cost or poor query performance, except for the B^x-tree (the state-of-the-art), which consists of multiple B^+-trees indexing the 1D values transformed from the (multi-dimensional) moving objects based on a space filling curve (Hilbert, in particular). This curve, however, does not consider object velocities, and as a result, query processing with a B^x-tree retrieves a large number of false hits, which seriously compromises its efficiency. It is natural to wonder "can we obtain better performance by capturing also the velocity information, using a Hilbert curve of a higher dimensionality?". This paper provides a positive answer by developing the B^dual-tree, a novel spatiotemporal access method leveraging pure relational methodology. We show, with theoretical evidence, that the B^dual-tree indeed outperforms the B^x-tree in most circumstances. Furthermore, our technique can effectively answer progressive spatiotemporal queries, which are poorly supported by B^x-trees.
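The sketch below illustrates the underlying idea of folding velocity into the 1D key so that a plain ordered index (e.g., a B^+-tree) can serve moving objects. It uses Morton (Z-order) bit interleaving as a simpler stand-in for the 4D Hilbert mapping of the B^dual-tree; all names and parameters are illustrative.

```python
def interleave_bits(values, bits=8):
    """Interleave the low `bits` bits of each value (Morton / Z-order key)."""
    key = 0
    for b in range(bits):
        for i, v in enumerate(values):
            key |= ((v >> b) & 1) << (b * len(values) + i)
    return key

def dual_key(x, y, vx, vy, bits=8, vmax=100):
    """1D key for a moving object built from discretized location AND velocity,
    so objects with similar positions and velocities get nearby keys.
    (The real B^dual-tree uses a 4D Hilbert curve; Morton order is a simpler
    stand-in used here only for illustration.)"""
    cells = [int(x), int(y), int(vx + vmax), int(vy + vmax)]  # shift velocities to be non-negative
    return interleave_bits(cells, bits)

print(dual_key(10, 20, -3, 7))   # key usable in any ordered index, e.g. a B+-tree
```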

7.
The numerical evaluation of Hilbert transforms on the real line for functions that exhibit oscillatory behavior is investigated. A fairly robust numerical procedure is developed that is based on the use of convergence accelerator techniques. Several different types of oscillatory behavior are examined that can be successfully treated by the approach given. A few examples of functions whose oscillations are too extreme to deal with are also discussed.
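A bare-bones principal-value quadrature for the Hilbert transform (convention H f(x) = (1/π) PV ∫ f(t)/(x−t) dt), checked on a known transform pair, is sketched below; it uses a plain midpoint rule with truncation and none of the convergence-acceleration machinery the paper develops for oscillatory integrands.

```python
import numpy as np

def hilbert_transform(f, x, smax=200.0, n=200000):
    """Evaluate H f(x) = (1/pi) PV ∫ f(t)/(x-t) dt, rewritten as
    (1/pi) ∫_0^∞ [f(x-s) - f(x+s)]/s ds so the singularity cancels.
    Plain midpoint rule on a truncated interval; a sketch only."""
    s = (np.arange(n) + 0.5) * (smax / n)        # midpoints avoid s = 0
    integrand = (f(x - s) - f(x + s)) / s
    return integrand.sum() * (smax / n) / np.pi

f = lambda t: 1.0 / (1.0 + t**2)                 # known pair: H f(x) = x / (1 + x^2)
for x in (0.5, 2.0):
    print(hilbert_transform(f, x), x / (1 + x**2))
```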

8.
Cost curves: An improved method for visualizing classifier performance
This paper introduces cost curves, a graphical technique for visualizing the performance (error rate or expected cost) of 2-class classifiers over the full range of possible class distributions and misclassification costs. Cost curves are shown to be superior to ROC curves for visualizing classifier performance for most purposes. This is because they visually support several crucial types of performance assessment that cannot be done easily with ROC curves, such as showing confidence intervals on a classifier's performance, and visualizing the statistical significance of the difference in performance of two classifiers. A software tool supporting all the cost curve analysis described in this paper is available from the authors. Editors: Tom Fawcett
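The core construction is that each ROC point (FPR, TPR) becomes a straight line in cost space. A small sketch of that relation, with two hypothetical classifiers A and B:

```python
import numpy as np

def cost_line(fpr, tpr, pcf):
    """A single (FPR, TPR) ROC point becomes a straight line in cost space:
    normalized expected cost as a function of the probability-cost value
    PCF = p(+)C(-|+) / (p(+)C(-|+) + p(-)C(+|-)).  At PCF=0 the cost equals
    the false-positive rate, at PCF=1 the false-negative rate."""
    return (1.0 - tpr - fpr) * pcf + fpr

pcf = np.linspace(0, 1, 101)
clf_a = cost_line(0.10, 0.70, pcf)   # hypothetical classifier A
clf_b = cost_line(0.30, 0.90, pcf)   # hypothetical classifier B
crossover = pcf[np.argmin(np.abs(clf_a - clf_b))]
print(f"A is cheaper for PCF below ~{crossover:.2f}, B for larger PCF")
```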

9.
The partitioning of an adaptive grid for distribution over parallel processors is considered in the context of adaptive multilevel methods for solving partial differential equations. A partitioning method based on the refinement-tree is presented. This method applies to most types of grids in two and three dimensions. For triangular and tetrahedral grids, it is guaranteed to produce connected partitions; no other partitioning method makes this guarantee. The method is related to the OCTREE method and space filling curves. Numerical results comparing it with several popular partitioning methods show that it computes partitions in an amount of time similar to fast load balancing methods like recursive coordinate bisection, and with mesh quality similar to slower, more optimal methods like the multilevel diffusive method in ParMETIS.
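A toy sketch of the load-balancing step: leaf elements listed in refinement-tree (depth-first) order are cut into contiguous, roughly equal-weight chunks. Tree construction, connectivity handling and element weights below are all illustrative.

```python
def partition_by_tree_order(weights, nparts):
    """Assign leaf elements, listed in refinement-tree (depth-first) order,
    to `nparts` contiguous chunks of roughly equal total weight.  Contiguity
    along the traversal is what the tree-based method relies on to keep
    partitions connected; only the load-balancing step is shown."""
    total = sum(weights)
    target = total / nparts
    parts, acc, part = [], 0.0, 0
    for w in weights:
        # advance to the next partition once the running sum crosses its quota
        if acc >= (part + 1) * target and part < nparts - 1:
            part += 1
        parts.append(part)
        acc += w
    return parts

print(partition_by_tree_order([1, 1, 2, 1, 3, 1, 1, 2], 3))  # [0,0,0,1,1,2,2,2]
```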

10.
In multi-dimensional databases the essential tool for accessing data is the range query (or window query). In this paper we introduce a new algorithm for processing range queries in the universal B-tree (UB-tree), an index structure for searching multi-dimensional databases. The new range query algorithm (called the DRU algorithm) works efficiently, even for high-dimensional databases. In particular, using the DRU algorithm many of the UB-tree inner nodes need not be accessed. We explain the DRU algorithm using a simple geometric model, providing a clear insight into the problem. More specifically, the model exploits an interesting relation between the Z-curve and generalized quad-trees. We also present experimental results for the DRU algorithm implementation.
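A minimal illustration of the Z-curve mapping and a naive window query on Z-ordered data; the DRU algorithm's Z-region pruning is not reproduced, only the restriction of the scan to the key interval spanned by the query box corners.

```python
def z_order(x, y, bits=16):
    """Morton (Z-curve) key: interleave the bits of x and y.  Points close on
    the Z-curve tend to be close in space, which lets a UB-tree map 2D regions
    onto 1D key intervals."""
    key = 0
    for b in range(bits):
        key |= ((x >> b) & 1) << (2 * b)
        key |= ((y >> b) & 1) << (2 * b + 1)
    return key

def range_query(points, xlo, xhi, ylo, yhi):
    """Naive window query on Z-ordered data: restrict the scan to the key
    interval [z(xlo,ylo), z(xhi,yhi)] and then filter exactly.  A real
    UB-tree/DRU implementation skips whole Z-regions instead."""
    lo, hi = z_order(xlo, ylo), z_order(xhi, yhi)
    data = sorted(points, key=lambda p: z_order(*p))
    return [(x, y) for x, y in data
            if lo <= z_order(x, y) <= hi and xlo <= x <= xhi and ylo <= y <= yhi]

pts = [(1, 1), (2, 3), (5, 5), (6, 2), (3, 3), (7, 7)]
print(range_query(pts, 2, 5, 2, 5))   # -> [(2, 3), (3, 3), (5, 5)]
```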

11.
Artificial Color uses data from two or more spectrally overlapping sensitivity curves to assign class membership to pixels and ultimately to images. The usefulness of Artificial Color for various scene segmentation tasks has been shown in several recent papers, but those demonstrations all used sensitivity curves not optimized for the particular task, i.e., the R, G, B filters of commercial color cameras. This paper explores means to evolve spectral sensitivity curves suited to any specialized task and illustrates the method with synthetic data chosen to be very hard to discriminate spectrally. Two special cases are illustrated. In one, a single Gaussian curve is used for a dichroic beamsplitter, so that the curve and its complement are used for discrimination. In the other case, two essentially orthogonal curves are utilized for the same task. The single Gaussian curve leads to poorer discrimination but better light efficiency relative to the two curves. Both do quite well on the difficult target problem.

12.
This paper presents WireWarping, a novel approach for computing a flattened planar piece with length-preserved feature curves from a 3D piecewise linear surface patch. The property of length preservation on feature curves is very important in industrial applications for controlling the shape and dimension of products fabricated from planar pieces. WireWarping simulates warping a given 3D surface patch onto the plane with the feature curves acting as tendon wires whose edge lengths are preserved. During warping, the surface-angle variations between edges on wires are minimized so that the shape of a planar piece is similar to its corresponding 3D patch. Two schemes are developed, a progressive warping scheme and a global warping scheme; the progressive scheme is flexible for local shape control, while the global scheme performs well on highly distorted patches. Experimental results show that WireWarping can successfully flatten surface patches into planar pieces while preserving the length of edges on feature curves.

13.
In this paper we present a novel approximate algorithm to calculate the top-k closest pairs join query of two large and high dimensional data sets. The algorithm has worst case time complexity and space complexity and guarantees a solution within a factor of the exact one, where t ∈ {1, 2, …, ∞} denotes the Minkowski metric L_t of interest and d the dimensionality. It makes use of the concept of space filling curve to establish an order between the points of the space and performs at most d + 1 sorts and scans of the two data sets. During a scan, each point from one data set is compared with its closest points, according to the space filling curve order, in the other data set and points whose contribution to the solution has already been analyzed are detected and eliminated. Experimental results on real and synthetic data sets show that our algorithm behaves as an exact algorithm in low dimensional spaces; it is able to prune the entire (or a considerable fraction of the) data set even for high dimensions if certain separation conditions are satisfied; in any case it returns a solution within a small error to the exact one.
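A compact sketch of the scan idea: sort one set by a space-filling-curve key and compare each point of the other set only with its neighbours in curve order, keeping the k best pairs in a heap. It uses a Morton key, a single scan and no pruning, unlike the d + 1 shifted scans of the paper; all parameters and names are illustrative.

```python
import heapq
import numpy as np

def zkey(p, bits=10):
    """Morton key of an integer-grid point (stand-in for the space filling curve)."""
    key = 0
    for b in range(bits):
        for i, v in enumerate(p):
            key |= ((int(v) >> b) & 1) << (b * len(p) + i)
    return key

def approx_topk_closest_pairs(A, B, k=3, window=4):
    """Approximate top-k closest pairs of two point sets: sort B by curve key
    and compare each point of A only with the `window` nearest B points in
    curve order, keeping the k smallest distances in a bounded heap."""
    B_sorted = sorted(B, key=zkey)
    keys = [zkey(b) for b in B_sorted]
    heap = []  # stores (-distance, a, b) so the largest distance pops first
    for a in A:
        j = np.searchsorted(keys, zkey(a))
        for b in B_sorted[max(0, j - window): j + window]:
            d = float(np.linalg.norm(np.asarray(a) - np.asarray(b)))
            heapq.heappush(heap, (-d, tuple(a), tuple(b)))
            if len(heap) > k:
                heapq.heappop(heap)
    return sorted([(-d, a, b) for d, a, b in heap])

A = [(1, 2), (8, 8), (4, 5)]
B = [(2, 2), (7, 9), (0, 0), (5, 5)]
print(approx_topk_closest_pairs(A, B, k=2))
```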

14.
This paper proposes a new method for finding principal curves from data sets. Motivated by solving the problem of highly curved and self-intersecting curves, we present a bottom-up strategy to construct a graph called a principal graph for representing a principal curve. The method initializes a set of vertices based on principal oriented points introduced by Delicado, and then constructs the principal graph from these vertices through a two-layer iteration process. In inner iteration, the kernel smoother is used to smooth the positions of the vertices. In outer iteration, the principal graph is spanned by minimum spanning tree and is modified by detecting closed regions and intersectional regions, and then, new vertices are inserted into some edges in the principal graph. We tested the algorithm on simulated data sets and applied it to image skeletonization. Experimental results show the effectiveness of the proposed algorithm.
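A stripped-down sketch of the two ingredients named in the abstract, kernel smoothing of vertex positions and a minimum spanning tree over the vertices; closed-region and intersection handling are omitted, vertex initialization is simplified to random sampling, and the helper names are illustrative.

```python
import numpy as np

def minimum_spanning_tree_edges(V):
    """Prim's algorithm on the complete Euclidean graph over vertices V."""
    n = len(V)
    visited = np.zeros(n, dtype=bool)
    visited[0] = True
    dist = np.linalg.norm(V - V[0], axis=1)
    parent = np.zeros(n, dtype=int)
    edges = []
    for _ in range(n - 1):
        j = int(np.argmin(np.where(visited, np.inf, dist)))
        edges.append((int(parent[j]), j))
        visited[j] = True
        newd = np.linalg.norm(V - V[j], axis=1)
        closer = (newd < dist) & ~visited
        parent[closer] = j
        dist[closer] = newd[closer]
    return edges

def smooth_vertices(V, data, bandwidth=0.3):
    """One pass of kernel smoothing: move each vertex to the Gaussian-weighted
    mean of the data points around it (the 'inner iteration' of the paper)."""
    out = np.empty_like(V)
    for i, v in enumerate(V):
        w = np.exp(-np.sum((data - v) ** 2, axis=1) / (2 * bandwidth ** 2))
        out[i] = (w[:, None] * data).sum(0) / w.sum()
    return out

rng = np.random.default_rng(0)
t = rng.uniform(0, 2 * np.pi, 400)
data = np.c_[np.cos(t), np.sin(t)] + 0.05 * rng.normal(size=(400, 2))  # noisy circle
V = smooth_vertices(data[rng.choice(400, 20, replace=False)], data)     # smoothed vertices
print(minimum_spanning_tree_edges(V)[:5])                               # first MST edges
```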

15.
Motivated by the need for high-accuracy geographic data in the examination and coordination of radio stations in border areas, a distance-field based method for computing equidistant (offset) lines of map boundary lines is proposed. The method builds a distance field on a regular grid of points and, through an analysis of the error sources, derives an upper bound on the error expressed in terms of the grid spacing. In practice, only a maximum allowable error needs to be specified; the computation for a given boundary line then runs automatically and yields the equidistant line, represented as a polygon, at the required accuracy. Experiments show that the distance-field based method is feasible and has controllable accuracy, and the computed equidistant lines have already been used in the technical examination of border radio stations.
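A minimal sketch of the distance-field construction on a regular grid and the extraction of the nodes near a chosen offset distance, with the grid spacing h playing the role of the error control; the boundary data, tolerances and names below are made up for illustration.

```python
import numpy as np

def point_segment_dist(p, a, b):
    """Euclidean distance from point p to segment ab."""
    ab, ap = b - a, p - a
    t = np.clip(np.dot(ap, ab) / max(np.dot(ab, ab), 1e-12), 0.0, 1.0)
    return np.linalg.norm(p - (a + t * ab))

def distance_field(boundary, xs, ys):
    """Distance from every node of a regular grid to a polyline boundary."""
    segs = list(zip(boundary[:-1], boundary[1:]))
    field = np.empty((len(ys), len(xs)))
    for i, y in enumerate(ys):
        for j, x in enumerate(xs):
            p = np.array([x, y])
            field[i, j] = min(point_segment_dist(p, a, b) for a, b in segs)
    return field

# boundary polyline and a grid whose spacing h controls the offset-line error
boundary = np.array([[0.0, 0.0], [1.0, 0.2], [2.0, 0.0], [3.0, 0.5]])
h = 0.05
xs, ys = np.arange(-1, 4, h), np.arange(-1, 2, h)
field = distance_field(boundary, xs, ys)
offset = 0.5                                # desired offset distance
band = np.abs(field - offset) <= h          # grid nodes near the equidistant line
print(int(band.sum()), "grid nodes approximate the offset line")
```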

16.
We extend extreme learning machine (ELM) classifiers to complex Reproducing Kernel Hilbert Spaces (RKHS) where the input/output variables as well as the optimization variables are complex-valued. A new family of classifiers, called complex-valued ELM (CELM), suitable for complex-valued multiple-input–multiple-output processing is introduced. In the proposed method, the associated Lagrangian is computed using induced RKHS kernels, adopting a Wirtinger calculus approach formulated as a constrained optimization problem similarly to the conventional ELM classifier formulation. When training the CELM, the Karush–Kuhn–Tucker (KKT) theorem is used to solve the dual optimization problem, which consists of simultaneously satisfying the smallest-training-error and smallest-norm-of-output-weights criteria. The proposed formulation also addresses aspects of quaternary classification within a Clifford algebra context. For 2D complex-valued inputs, user-defined complex-coupled hyper-planes divide the classifier input space into four partitions. For 3D complex-valued inputs, the formulation generates three pairs of complex-coupled hyper-planes through orthogonal projections. The six hyper-planes then divide the 3D space into eight partitions. It is shown that the CELM problem formulation is equivalent to solving six real-valued ELM tasks, which are induced by projecting the chosen complex kernel across the different user-defined coordinate planes. A classification example of powdered samples on the basis of their terahertz spectral signatures is used to demonstrate the advantages of the CELM classifiers compared to their SVM counterparts. The proposed classifiers retain the advantages of their ELM counterparts, in that they can perform multiclass classification with lower computational complexity than SVM classifiers. Furthermore, because of their ability to perform classification tasks fast, the proposed formulations are of interest to real-time applications.
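A toy complex-valued ELM in its most elementary form (random complex hidden layer, split tanh activation, pseudo-inverse output weights) is sketched below; the paper's RKHS kernels, Wirtinger-calculus dual and coupled hyper-planes are not reproduced, and the quadrant-based toy task is made up for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

def split_tanh(Z):
    """Split-complex activation: tanh applied to real and imaginary parts."""
    return np.tanh(Z.real) + 1j * np.tanh(Z.imag)

def celm_train(X, T, hidden=64):
    """Elementary complex-valued ELM: random complex hidden weights, a split
    activation, and output weights solved by least squares (pseudo-inverse)."""
    n_in = X.shape[1]
    W = rng.normal(size=(n_in, hidden)) + 1j * rng.normal(size=(n_in, hidden))
    b = rng.normal(size=hidden) + 1j * rng.normal(size=hidden)
    H = split_tanh(X @ W + b)
    beta = np.linalg.pinv(H) @ T           # smallest-norm least-squares weights
    return W, b, beta

def celm_predict(X, W, b, beta):
    return split_tanh(X @ W + b) @ beta

# toy quaternary task: class = quadrant of the first complex feature
X = rng.normal(size=(200, 3)) + 1j * rng.normal(size=(200, 3))
labels = 2 * (X[:, 0].real > 0) + (X[:, 0].imag > 0)
T = np.exp(1j * (np.pi / 2) * labels)[:, None]       # targets on the unit circle
W, b, beta = celm_train(X, T)
pred = np.round(np.angle(celm_predict(X, W, b, beta)[:, 0]) / (np.pi / 2)) % 4
print("training accuracy:", np.mean(pred == labels))
```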

17.
This article is about testing the equality of several normal means when the variances are unknown and arbitrary, i.e., the setup of one-way ANOVA. Even though several tests are available in the literature, none of them perform well in terms of Type I error probability under various sample size and parameter combinations. In fact, Type I errors can be highly inflated for some of the commonly used tests; a serious issue that appears to have been overlooked. We propose a parametric bootstrap (PB) approach and compare it with three existing location-scale invariant tests—the Welch test, the James test and the generalized F (GF) test. The Type I error rates and powers of the tests are evaluated using Monte Carlo simulation. Our studies show that the PB test is the best among the four tests with respect to Type I error rates. The PB test performs very satisfactorily even for small samples while the Welch test and the GF test exhibit poor Type I error properties when the sample sizes are small and/or the number of means to be compared is moderate to large. The James test performs better than the Welch test and the GF test. It is also noted that the same tests can be used to test the significance of the random effect variance component in a one-way random model under unequal error variances. Such models are widely used to analyze data from inter-laboratory studies. The methods are illustrated using some examples.
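One plausible shape of the parametric bootstrap procedure is sketched below, using the variance-weighted between-group sum of squares (a Welch-type numerator) as the statistic and plug-in group variances under the null; the paper's exact statistic may differ in detail, and the data are synthetic.

```python
import numpy as np

rng = np.random.default_rng(0)

def pb_anova(groups, n_boot=5000):
    """Parametric bootstrap test of equal means with unequal variances.
    Under H0, bootstrap samples are drawn with mean 0 and the plug-in group
    standard deviations; the p-value is the fraction of bootstrap statistics
    at least as large as the observed one.  A sketch of the PB idea only."""
    def stat(samples):
        m = np.array([s.mean() for s in samples])
        w = np.array([len(s) / s.var(ddof=1) for s in samples])
        grand = (w * m).sum() / w.sum()
        return (w * (m - grand) ** 2).sum()

    t_obs = stat(groups)
    n = [len(g) for g in groups]
    sd = [g.std(ddof=1) for g in groups]
    t_boot = np.array([stat([rng.normal(0, s, size=k) for s, k in zip(sd, n)])
                       for _ in range(n_boot)])
    return float(np.mean(t_boot >= t_obs))          # PB p-value

g1 = rng.normal(10.0, 1.0, 8)
g2 = rng.normal(10.5, 3.0, 12)
g3 = rng.normal(10.2, 0.5, 6)
print("p-value:", pb_anova([g1, g2, g3]))
```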

18.
Development of reliable medical decision support systems has been the subject of many studies, among which Artificial Neural Networks (ANNs) gained increasing popularity and gave promising results. However, wider application of ANNs in clinical practice remains limited due to the lack of a standard and intuitive procedure for their configuration and evaluation, which is traditionally a slow process depending on human experts. The principal contribution of this study is a novel procedure for obtaining ANN predictive models with high performance. To achieve this with minimal user effort, optimal configuration of the ANN was performed automatically by Genetic Algorithms (GA). The only two user-dependent tasks were selecting data (input and output variables) and evaluating the ANN threshold probability with respect to Regret Theory (RT). The goal of the GA optimization was reaching the best prognostic performance relevant for clinicians: correctness, discrimination and calibration. After optimally configuring ANNs with respect to these criteria, the clinical usefulness was evaluated by RT Decision Curve Analysis. The method is initially proposed for the prediction of advanced bladder cancer (BC) in patients undergoing radical cystectomy, a clinically relevant problem with profound influence on health care. Testing on data from a ten-year cohort study, which included 183 evaluable patients, showed that softmax activation functions and good calibration were the most important for obtaining reliable BC predictive models for the given dataset. Extensive analysis and comparison with the solutions commonly used in the literature showed that better prognostic performance was achieved while user dependency was significantly reduced. It is concluded that the presented procedure represents a suitable, robust and user-friendly framework with the potential for wide application and influence in the further development of health care decision support systems.

19.
In many practical scheduling problems, the concerns of the decision-maker may not be all known in advance and therefore may not be included in the initial problem definition as an objective function and/or as constraints. In such a case, the usual techniques of multi-objective optimization become inapplicable. To cope with this problem and to facilitate handling the concerns of the decision-maker, which can be implicit or qualitative, a dedicated methodological framework is needed. In this paper we propose a new two-step framework. First, we aim at obtaining a set of schedules that can be considered efficient with respect to a performance measure and at the same time different enough from one another to enable flexibility in the final choice. We formalize this new problem and suggest to address it with a multimodal optimization approach. Niching considerations are discussed for common scheduling problems. Through the flexibility induced with this approach, the additional considerations can be taken into account in a second step, which allows decision-makers to select an appropriate schedule among a set of sound schedules (in contrast to common optimization approaches, where usually a single solution is obtained and it is final). The proposed two-step approach can be used to handle a wide range of underlying scheduling problems. To show its potential and benefits we illustrate the framework on a set of hybrid flow shop instances that have been previously studied in the literature. We develop a multimodal genetic algorithm that employs an adapted version of the restricted tournament selection for niching purposes in the first step. The second step takes into account additional concerns of the decision-maker related to the ability of the schedules to absorb the negative effects due to random machine breakdowns. Our computational experiments indicate that the proposed framework is capable of generating numerous high-performance (mostly optimal) schedules. Additionally, our computational results demonstrate that the proposed framework provides the decision-maker a high flexibility in dealing with subsequent side concerns, since there are drastic differences in the capabilities of the efficient solutions found in Step 1 to absorb the negative impacts of machine breakdowns.
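The niching step can be illustrated with a small restricted-tournament-selection routine: the offspring competes only with the most similar of a few randomly sampled population members and replaces it if better. The permutation encoding, distance and makespan values below are made up for illustration and are not the paper's adapted operator.

```python
import numpy as np

rng = np.random.default_rng(2)

def restricted_tournament_replace(pop, fitness, child, child_fit, window=5,
                                  dist=lambda a, b: np.sum(a != b)):
    """Restricted tournament selection (niching): the offspring competes only
    with the most similar of `window` randomly chosen population members and
    replaces it if better (here: lower makespan is better).  Restricting the
    competition to similar schedules preserves several distinct good solutions."""
    idx = rng.choice(len(pop), size=window, replace=False)
    nearest = min(idx, key=lambda i: dist(pop[i], child))
    if child_fit < fitness[nearest]:
        pop[nearest] = child
        fitness[nearest] = child_fit
    return pop, fitness

# toy population of permutation-encoded schedules with hypothetical makespans
pop = [rng.permutation(10) for _ in range(20)]
fitness = [float(rng.uniform(50, 100)) for _ in pop]
child = rng.permutation(10)
pop, fitness = restricted_tournament_replace(pop, fitness, child, 60.0)
print(min(fitness))
```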

20.
Species distribution models (SDMs) have received increasing attention in freshwater management to support decision making. Existing SDMs are mainly data-driven and often developed with statistical and machine learning methods, but with little consideration of hypothetical ecological knowledge. Conceptual SDMs exist, but their performance is limited, making them less attractive for decision making. Therefore, there is a need for model identification tools that search for alternative model formulations. This paper presents a methodology, illustrated with the example of river pollution in Ecuador, using a simple genetic algorithm (SGA) to identify well performing SDMs by means of input variable selection (IVS). An analysis for 14 macroinvertebrate taxa shows that the SGA is able to identify well performing SDMs. It is observed that uncertainty on the model structure is relatively large. The developed tool can help model developers and decision makers gain insight into the driving factors shaping the species assemblage.
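A generic SGA for input-variable selection is sketched below, with binary masks over candidate variables as chromosomes and a user-supplied score as fitness; the operators and the toy fitness function are illustrative, not the exact configuration used in the paper.

```python
import numpy as np

rng = np.random.default_rng(3)

def sga_variable_selection(X, y, fitness, pop_size=30, generations=40,
                           p_cross=0.7, p_mut=0.02):
    """Simple genetic algorithm for input-variable selection: chromosomes are
    binary masks over the candidate variables; fitness is any model score
    evaluated on the selected columns (a user-supplied function here)."""
    n_var = X.shape[1]
    pop = rng.integers(0, 2, size=(pop_size, n_var))
    for _ in range(generations):
        scores = np.array([fitness(X[:, m.astype(bool)], y) for m in pop])
        # binary tournament selection
        parents = pop[[max(rng.choice(pop_size, 2), key=lambda i: scores[i])
                       for _ in range(pop_size)]]
        # one-point crossover and bit-flip mutation
        children = parents.copy()
        for i in range(0, pop_size - 1, 2):
            if rng.random() < p_cross:
                cut = rng.integers(1, n_var)
                children[i, cut:] = parents[i + 1, cut:]
                children[i + 1, cut:] = parents[i, cut:]
        children ^= (rng.random(children.shape) < p_mut).astype(children.dtype)
        pop = children
    scores = np.array([fitness(X[:, m.astype(bool)], y) for m in pop])
    return pop[int(np.argmax(scores))]

# toy fitness: correlation of a mean-of-selected-columns "model" with y
X = rng.normal(size=(100, 8))
y = X[:, 1] + 0.5 * X[:, 4] + 0.1 * rng.normal(size=100)
fit = lambda Xs, y: 0.0 if Xs.shape[1] == 0 else abs(np.corrcoef(Xs.mean(axis=1), y)[0, 1])
print(sga_variable_selection(X, y, fit))   # best variable mask found
```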
