Similar Literature
20 similar records found.
1.
In this paper a new method is presented for combining 2D GIS vector data with a 2.5D DTM represented by a triangulated irregular network (TIN) to derive integrated, triangle-based 2.5D object-based landscape models (also known as 2.5D-GIS-TIN). The algorithm takes special geometric constellations into account and fully exploits the existing topologies of both input data sets: it "sews the 2D data into the TIN like a sewing machine" while traversing the TIN along the 2D data. We call this the radial topology algorithm. We discuss its advantages and limitations and describe ways to eliminate redundant nodes generated during the integration process. Four examples from practical work show that it is feasible to compute and work with such integrated data sets. We also assess the integrated data models against various general requirements and conclude that integration based on triangulations has a number of distinct advantages.
Christian Heipke
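A toy sketch of the elementary operation in such an integration: while walking a 2D vector edge across the TIN, each crossing with a triangle edge yields a new (Steiner) node whose height is interpolated along that TIN edge. The geometry below is minimal and invented, and the topology bookkeeping of the radial topology algorithm is deliberately omitted.

```python
# Toy "sewing" step: intersect a 2D GIS edge with one TIN edge and derive
# the height of the resulting new node by linear interpolation.
import numpy as np

def segment_intersection(p, q, a, b):
    """Intersection point of 2D segments p-q and a-b, or None."""
    d1, d2 = q - p, b - a
    denom = d1[0] * d2[1] - d1[1] * d2[0]
    if abs(denom) < 1e-12:                      # parallel / collinear: skip
        return None
    r = a - p
    t = (r[0] * d2[1] - r[1] * d2[0]) / denom   # parameter along p-q
    s = (r[0] * d1[1] - r[1] * d1[0]) / denom   # parameter along a-b
    return p + t * d1 if 0 <= t <= 1 and 0 <= s <= 1 else None

# a TIN edge with heights at its endpoints, and a 2D GIS edge crossing it
a, b = np.array([0.0, 0.0]), np.array([4.0, 0.0])
za, zb = 10.0, 14.0
p, q = np.array([1.0, -1.0]), np.array([3.0, 1.0])

hit = segment_intersection(p, q, a, b)
if hit is not None:
    lam = np.linalg.norm(hit - a) / np.linalg.norm(b - a)
    print("new node:", hit, "z =", za + lam * (zb - za))  # interpolated height
```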

2.
《遥感技术与应用》2013,28(5):766-772
Glaciers are an important natural fresh-water resource of great potential and play a vital role in the balance and stability of the regional ecological environment. This study acquired airborne hyperspectral data over Zhongxi-1 Glacier in August 2011. First, preprocessing (radiometric calibration, atmospheric correction, and geometric correction) was performed on the hyperspectral data. Second, principal component analysis (PCA) and the minimum noise fraction (MNF) transform were each used to reduce the data dimensionality. Third, six classification methods (maximum likelihood, minimum distance, Mahalanobis distance, spectral angle, binary encoding, and spectral information divergence) were applied to the two reduced data sets, and the results were compared to determine the best dimensionality-reduction method and the best classifier. Finally, the glacier classification from the hyperspectral data was compared with that from HJ satellite multispectral data. The results show that the classification accuracy of the PCA-transformed hyperspectral data is higher than that of the MNF-transformed data; for the PCA-transformed data set, the Mahalanobis distance, maximum likelihood, and minimum distance methods produced the better classification results, while for the MNF-transformed data set, the spectral angle and spectral information divergence methods outperformed the others.
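As a rough illustration of the pipeline described above (dimensionality reduction followed by a comparison of supervised classifiers), the following sketch uses synthetic data in place of the airborne hyperspectral cube. The band count, class labels, component count, and the use of scikit-learn are all illustrative assumptions; an MNF transform (not available in scikit-learn) would slot in where PCA appears.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import QuadraticDiscriminantAnalysis
from sklearn.neighbors import NearestCentroid
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
n_pixels, n_bands = 2000, 128            # hypothetical cube flattened to pixels x bands
X = rng.normal(size=(n_pixels, n_bands))
y = rng.integers(0, 3, size=n_pixels)    # e.g. ice / snow / rock labels (invented)
X[y == 1] += 0.4                         # shift class means so they are separable
X[y == 2] -= 0.4

X_red = PCA(n_components=10).fit_transform(X)   # PCA step; MNF would go here instead
X_tr, X_te, y_tr, y_te = train_test_split(X_red, y, random_state=0)

classifiers = {
    "maximum likelihood (Gaussian)": QuadraticDiscriminantAnalysis(),
    "minimum distance (centroid)":   NearestCentroid(),
}
for name, clf in classifiers.items():
    acc = accuracy_score(y_te, clf.fit(X_tr, y_tr).predict(X_te))
    print(f"{name:30s} accuracy = {acc:.3f}")
```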

3.
Given two ordered, labeled trees β and α, finding the distance from tree β to tree α is an important problem in many fields, for example pattern recognition. In this paper, a VLSI algorithm for calculating the tree-to-tree distance is presented. The computation structure of the algorithm is a 2-D mesh of size m*n, and the time is O(m+n), where m and n are the numbers of nodes of tree β and tree α, respectively.

4.
Given two ordered, labeled trees β and α, to find the distance from tree β to tree α is an important problem in many fields, for example, the pattern recognition field. In this paper, a VLSI algorithm for calculating the tree-to-tree distance is presented. The computation structure of the algorithm is a 2-D mesh of size m*n and the time is O(m+n), where m, n are the numbers of nodes of tree β and tree α, respectively.

5.
The simplified Jacobi–Davidson (JD) method is a variant of the JD method without subspace acceleration; if the correction equation is solved only approximately, the inexact simplified JD method is obtained. In this paper we present a new convergence analysis of the inexact simplified JD method, applicable to polynomial eigenvalue problems with simple eigenpairs. We first establish a relationship between the solution of the correction equation and the residual of the approximate eigenpair. From this relationship, we bound the difference of two adjacent approximate eigenvalues in terms of the residual norm of the approximate eigenpair, and then prove convergence of the inexact simplified JD method in terms of that residual norm. Depending on how accurately the correction equation is solved, the convergence rate of the inexact simplified JD method may take several different forms. Numerical experiments confirm the convergence analysis.
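For orientation, one common way to write the correction equation for a polynomial eigenvalue problem $P(\lambda)x = 0$ with approximate eigenpair $(\theta, u)$, $\|u\| = 1$, and residual $r = P(\theta)u$ is the projected system below; this is the standard Jacobi–Davidson form, and the paper's precise formulation may differ in details:

$$\Bigl(I - \frac{p\,u^{*}}{u^{*}p}\Bigr)\,P(\theta)\,\bigl(I - u u^{*}\bigr)\,t = -r, \qquad t \perp u, \qquad p = P'(\theta)\,u .$$

In the simplified method the next approximate eigenvector is obtained directly from $u + t$ rather than from an accelerating search subspace; "inexact" means this linear system is itself solved only approximately.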

6.
Fault diagnosis of liquid rocket propulsion systems (LRPSs) is a very important issue in space launch activities, particularly for manned space missions, since an efficient fault diagnosis system can significantly enhance safety and reliability. Inverse-problem-based diagnosis has recently attracted a great deal of research attention in the fault diagnosis domain; it provides a new, model-based strategy for monitoring the health of propulsion systems. To solve the inverse problems arising in the fault diagnosis of LRPSs, genetic algorithms (GAs) have in recent years been the first and most effective choice among available numerical optimization tools. However, a GA has many control parameters that must be chosen in advance, and sound theoretical tools for analyzing the effects of these parameters on diagnostic performance are still lacking. In this paper a comparative study of the influence of GA parameters on diagnostic results is conducted through a series of numerical experiments. The objective is to quantify the contribution of each algorithm parameter to the final diagnostic result and to provide reasonable guidance for choosing GA parameters in inverse-problem-based fault diagnosis of LRPSs. Constructive concluding remarks are made that should help in applying GAs to fault diagnosis practice for LRPSs in the future.
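To make the "control parameters" discussed above concrete, here is a toy real-coded GA in which the usual parameters (population size, crossover rate, mutation rate, number of generations) appear explicitly. The objective function is a stand-in, since the paper's LRPS diagnosis model is not reproduced here.

```python
import numpy as np

def ga_minimize(fitness, dim, pop_size=40, crossover_rate=0.8,
                mutation_rate=0.05, generations=200, seed=0):
    """Toy GA; the keyword arguments are the control parameters under study."""
    rng = np.random.default_rng(seed)
    pop = rng.uniform(-5.0, 5.0, size=(pop_size, dim))
    for _ in range(generations):
        f = np.apply_along_axis(fitness, 1, pop)
        # binary tournament selection
        i, j = rng.integers(0, pop_size, size=(2, pop_size))
        parents = pop[np.where(f[i] < f[j], i, j)]
        # uniform crossover with a randomly shuffled mate
        mates = parents[rng.permutation(pop_size)]
        do_cx = rng.random((pop_size, 1)) < crossover_rate
        take = rng.random((pop_size, dim)) < 0.5
        children = np.where(do_cx & take, mates, parents)
        # Gaussian mutation (no elitism, for brevity)
        mutate = rng.random((pop_size, dim)) < mutation_rate
        pop = children + mutate * rng.normal(0.0, 0.3, size=(pop_size, dim))
    f = np.apply_along_axis(fitness, 1, pop)
    return pop[np.argmin(f)], float(f.min())

sphere = lambda x: float(np.sum(x ** 2))   # stand-in objective
best, val = ga_minimize(sphere, dim=4)
print(best, val)
```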

7.
Edge detection from Fourier spectral data is important in many applications including image processing and the post-processing of numerical solutions of partial differential equations. The concentration method, introduced by Gelb and Tadmor in 1999, locates jump discontinuities in piecewise smooth functions from their Fourier spectral data. However, as is true for all global techniques, the method yields strong oscillations near the jump discontinuities, which makes it difficult to distinguish true discontinuities from artificial oscillations. This paper introduces refinements to the concentration method to reduce the oscillations. These refinements also improve the results in noisy environments. One technique adds filtering to the concentration method. Another uses convolution to determine the strongest correlations between the waveform produced by the concentration method and the one produced by the jump function approximation of an indicator function. A zero crossing based concentration factor, which creates a more localized formulation of the jump function approximation, is also introduced. Finally, the effects of zero-mean white Gaussian noise on the refined concentration method are analyzed. The investigation confirms that by applying the refined techniques, the variance of the concentration method is significantly reduced in the presence of noise. This work was partially supported by NSF grants CNS 0324957, DMS 0510813, DMS 0652833, and NIH grant EB 025533-01 (AG).
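The following sketch shows the unrefined concentration method that the paper starts from, using the first-order polynomial concentration factor σ(ξ) = πξ on a synthetic signal; the grid, mode count and test function are invented for illustration. The oscillations visible near the jumps in such a reconstruction are exactly what the refinements above are designed to suppress.

```python
import numpy as np

M, N = 512, 64                                   # grid points, Fourier modes kept
x = -np.pi + 2.0 * np.pi * np.arange(M) / M
f = np.where((x > -2.0) & (x < 0.5), 1.0, -1.0)  # jumps: +2 at x = -2, -2 at x = 0.5

k = np.arange(-N, N + 1)
# pseudo-spectral Fourier coefficients: fhat_k ~ (1/M) * sum_j f(x_j) e^{-ik x_j}
fhat = (f[None, :] * np.exp(-1j * np.outer(k, x))).sum(axis=1) / M

sigma = np.pi * np.abs(k) / N                    # concentration factor sigma(xi) = pi*xi
weights = 1j * np.sign(k) * sigma * fhat
jump = np.real((weights[:, None] * np.exp(1j * np.outer(k, x))).sum(axis=0))

i0 = int(np.argmax(np.abs(jump)))                # picks one of the two jumps
print(f"detected jump of {jump[i0]:+.2f} near x = {x[i0]:+.3f}")
```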

8.
In this paper, an improved algorithm is proposed for unconstrained global optimization to tackle non-convex nonlinear multivariate polynomial programming problems. The proposed algorithm is based on the Bernstein polynomial approach. Its novel features are a new rule for selecting the subdivision point, modified rules for selecting the subdivision direction, and a new acceleration device that avoids some unnecessary subdivisions. The performance of the algorithm is tested numerically on a collection of 16 test problems. The results show the proposed algorithm to be superior to the existing Bernstein algorithm in terms of the chosen performance metrics.
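The core tool behind any Bernstein-based branch-and-bound scheme is the range-enclosure property: on [0, 1] a polynomial is bounded by the minimum and maximum of its Bernstein coefficients. The sketch below illustrates only this basic property for a univariate polynomial, not the paper's new subdivision and acceleration rules.

```python
# Range enclosure via Bernstein coefficients on [0, 1].
from math import comb
import numpy as np

def bernstein_coeffs(a):
    """Bernstein coefficients b_j of p(x) = sum_i a_i x^i on [0,1], degree n."""
    n = len(a) - 1
    return [sum(comb(j, i) / comb(n, i) * a[i] for i in range(j + 1))
            for j in range(n + 1)]

a = [1.0, -3.0, 2.0]                    # p(x) = 1 - 3x + 2x^2 = (1-x)(1-2x)
b = bernstein_coeffs(a)
xs = np.linspace(0.0, 1.0, 1001)
p = np.polyval(a[::-1], xs)
print("Bernstein bounds:", min(b), max(b))      # [-0.5, 1.0]
print("true range      :", p.min(), p.max())    # enclosed: [-0.125, 1.0]
```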

9.
This paper presents the principles of GPRS, the realization of a GPRS modem, and how to embed the TCP/IP protocol stack. We then describe some problems that occur when the GPRS modem is online, and give ways to solve them.

10.
Personnel specifications have the greatest impact on total efficiency; they can help us design the work environment and enhance total efficiency. Determining the critical personnel attributes is a useful way to overcome the complications associated with multiple inputs and outputs. The proposed algorithm assesses the impact of personnel efficiency attributes on total efficiency through Data Envelopment Analysis (DEA), an Artificial Neural Network (ANN) and Rough Set Theory (RST). DEA plays two roles in the proposed integrated algorithm: it provides data for the ANN, and it selects the best reduct based on the ANN results. A reduct is a minimal subset of attributes that completely discriminates all objects in a data set; candidate reducts are generated by RST. The ANN likewise plays two roles in the integrated algorithm: its results are the basis for selecting the best reduct, and it is also used to forecast total efficiency. The proposed integrated approach is applied to an actual banking system, and its advantages are discussed.
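For readers unfamiliar with DEA, the sketch below scores hypothetical decision-making units with the classical input-oriented CCR model in multiplier form (a linear program). The data are invented, and this shows only the DEA building block, not the full DEA-ANN-RST pipeline.

```python
import numpy as np
from scipy.optimize import linprog

X = np.array([[2.0, 3.0], [4.0, 1.0], [3.0, 2.0]])   # inputs,  one row per unit
Y = np.array([[1.0],      [1.0],      [1.5]])        # outputs, one row per unit

def ccr_efficiency(k):
    """Efficiency of unit k: max u.y_k  s.t.  v.x_k = 1,  u.Y_j - v.X_j <= 0."""
    n_in, n_out = X.shape[1], Y.shape[1]
    c = np.concatenate([-Y[k], np.zeros(n_in)])          # maximize u.y_k
    A_ub = np.hstack([Y, -X])                            # u.Y_j - v.X_j <= 0
    b_ub = np.zeros(X.shape[0])
    A_eq = np.concatenate([np.zeros(n_out), X[k]])[None] # v.x_k = 1
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=[1.0],
                  bounds=[(0, None)] * (n_out + n_in))
    return -res.fun

for k in range(X.shape[0]):
    print(f"DMU {k}: efficiency = {ccr_efficiency(k):.3f}")
```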

11.
In 1994, S.G. Matthews introduced the notion of a partial metric space as a suitable mathematical tool for program verification (Ann. N.Y. Acad. Sci. 728:183–197, 1994), and applied the new structure to parallel computing by means of a partial metric version of the celebrated Banach fixed point theorem (Theor. Comput. Sci. 151:195–205, 1995). Later, M.P. Schellekens introduced the theory of complexity (quasi-metric) spaces as part of a topological foundation for the asymptotic complexity analysis of programs and algorithms (Electron. Notes Theor. Comput. Sci. 1:211–232, 1995). Schellekens also illustrated the applicability of this theory to the asymptotic complexity analysis of divide-and-conquer algorithms; in particular, he gave a new proof, based on the aforementioned Banach fixed point theorem, of the well-known fact that the Mergesort algorithm has optimal asymptotic average running time. In this paper, motivated by the utility of partial metrics in computer science, we discuss whether the Matthews fixed point theorem is a suitable tool for analyzing the asymptotic complexity of algorithms in the spirit of Schellekens. Specifically, we show that a slight modification of the well-known Baire partial metric on the set of all words over an alphabet is an appropriate tool for carrying out the asymptotic complexity analysis of algorithms via fixed point methods, without assuming the convergence condition inherent in the definition of the complexity space in the Schellekens framework. Finally, to illustrate and validate the developed theory, we apply our results to analyze the asymptotic complexity of the Quicksort, Mergesort and Largesort algorithms. Concretely, we retrieve through the new approach the well-known facts that the running times of Quicksort (worst-case behaviour), Mergesort and Largesort (average-case behaviour) are in the complexity classes $\mathcal{O}(n^{2})$, $\mathcal{O}(n\log_{2}(n))$ and $\mathcal{O}(2(n-1)-\log_{2}(n))$, respectively.
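For reference, the unmodified Baire partial metric that the paper adapts can be stated as follows: for words $x, y$ over an alphabet $\Sigma$, with $\ell(x,y)$ the length of their longest common prefix,

$$ p(x, y) = 2^{-\ell(x,y)}, \qquad \text{so in particular} \qquad p(x, x) = 2^{-|x|} . $$

A finite word thus has nonzero self-distance, which is the feature that distinguishes partial metrics from ordinary metrics and lets finite words (partially computed outputs) carry quantitative information. The specific modification used in the paper is not reproduced here.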

12.
A new approach to the analysis of the non-recurrent ladder network is presented. Using the converse of the conventional specification for the series and shunt branches, a rapid method of analysis is evolved. The rules governing the combination of the series admittances and shunt impedances are shown to be of an extremely simple nature and easier to apply than those pertaining to the conventional specification. In addition, unlike in the latter case, the same rules apply to both voltage ratio and transfer impedance.
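For contrast, the conventional specification that the paper departs from analyzes a ladder by a continued-fraction recursion over series impedances and shunt admittances; a minimal sketch with invented element values follows.

```python
import cmath

def ladder_input_impedance(stages, z_load):
    """Input impedance of a ladder network.
    stages: list of ('series', Z) or ('shunt', Y), source side first."""
    z = complex(z_load)
    for kind, val in reversed(stages):     # fold from the load back to the source
        if kind == 'series':
            z = val + z                    # series impedance adds directly
        else:
            z = 1.0 / (val + 1.0 / z)      # shunt admittance combines in parallel
    return z

# a two-section RC low-pass ladder evaluated at 1 kHz (illustrative values)
w = 2.0 * cmath.pi * 1.0e3
stages = [('series', 100.0), ('shunt', 1j * w * 1.0e-6),
          ('series', 100.0), ('shunt', 1j * w * 1.0e-6)]
print(ladder_input_impedance(stages, z_load=1.0e3))
```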

13.
Every stereovision application must cope with the correspondence problem. The space of the matching variables, often consisting of spatial coordinates, intensity and disparity, is commonly referred to as the data term (space). Since the data are often noisy, a smoothness preference is required to obtain a smooth (or piecewise smooth) disparity. To this end, every local method (e.g. window correlation techniques) performs a regularization of the data space. In this paper we propose a geometric framework for anisotropic regularization of the data space that seeks to preserve the discontinuities in this space while filtering out the noise. Global methods, by contrast, consider a non-regularized data term with a smoothing constraint imposed directly on the disparity. This paper also proposes a new idea in which the data space is regularized within a global method prior to the disparity evaluation. The idea is implemented on a state-of-the-art variational method. Experimental results on the Middlebury real images demonstrate the advantages of the proposed approach.
Nir Sochen

14.
Since PROLOG was chosen as the kernel language of the Fifth Generation Computer project, it has been one of the hottest topics among computer scientists all over the world, and both its implementation techniques and its applications have developed rapidly. In this paper, a new implementation scheme for PROLOG is proposed. The scheme is based on the substitution of instantiated variable values. It has many advantages, such as higher running speed, a smaller main memory requirement, and easier implementation. The scheme has been implemented by the authors on an IBM 4341.

15.
The aims of this study were to understand the skills required of digital publishing editors in a technology-driven publishing industry. The study used document analysis, literature analysis and expert interviews combined with the Delphi technique to explore the required competences of digital publishing editors, and adopted functional analysis to establish a competence framework for them. In the Delphi questionnaire, a panel of selected experts rated the competences in five modalities from 1 (not important at all) to 5 (very important). Competences with a mean value (μ) greater than 3.35 were categorized into "must-have" (μ ≥ 4.19) and "should-have" (3.35 ≤ μ < 4.19) groups. Fundamental Technology, Developing Digital Contents, Fundamental Knowledge, Commissioning-and-Acquisitions and Feedbacks were "must-have" competences; Design Digital Contents, Proofreading, Devices, Copy Editing, Substantive Editing, Project Management, Data Management, Promotions and Readers were "should-have" competences. The results could provide valuable references for educators and learners in digital-publishing-related fields, help with curriculum design and assessment, and aid in selecting and training students. They can also be used by the digital publishing industry as standards in the recruitment, training and evaluation of editors.

16.
The conceptual design of an engineering object requires multi-criterion analysis; in such a case, reliability analysis gives rough information on the availability and fulfillment of its main functions. In this paper, an analysis of the drive system of a river barge pusher is presented. It consists of a Reliability Block Diagram (RBD) analysis of various configurations of the system and a Markov analysis based on previously estimated operational data.
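As background, the two analyses combine elementary rules; the sketch below shows series/parallel RBD composition and the steady-state availability of a single repairable unit in a two-state Markov model. All numbers are invented and are not the pusher data from the paper.

```python
import numpy as np

def series(rs):    # all blocks must work
    return float(np.prod(rs))

def parallel(rs):  # at least one block must work
    return 1.0 - float(np.prod([1.0 - r for r in rs]))

# RBD: engine and gearbox in series, feeding two redundant thrusters
r_system = series([0.98, 0.995, parallel([0.9, 0.9])])
print(f"RBD system reliability   : {r_system:.4f}")

# Markov: up/down unit, steady-state availability A = mu / (lambda + mu)
lam, mu = 1.0e-4, 1.0e-2       # per-hour failure and repair rates (invented)
print(f"steady-state availability: {mu / (lam + mu):.4f}")
```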

17.
In this paper we present a Riemannian framework for smoothing data that are constrained to live in $\mathcal{P}(n)$, the space of symmetric positive-definite matrices of order n. We start by giving the differential geometry of $\mathcal{P}(n)$, with special emphasis on $\mathcal{P}(3)$, considered at a level of detail far greater than heretofore. We then use harmonic map and minimal immersion theories to construct three flows that drive a noisy field of symmetric positive-definite data into a smooth one. The harmonic map flow is equivalent to the heat flow, or isotropic linear diffusion, which smooths data everywhere. A modification of the harmonic flow leads to a Perona–Malik-like flow, a selective smoother that preserves edges. The minimal immersion flow gives rise to a nonlinear system of coupled diffusion equations with anisotropic diffusivity. Some preliminary numerical results are presented for synthetic DT-MRI data.
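As background for the geometry of $\mathcal{P}(n)$, the sketch below computes the standard affine-invariant geodesic distance $d(A,B) = \lVert \log(A^{-1/2} B A^{-1/2}) \rVert_F$ between two symmetric positive-definite matrices in $\mathcal{P}(3)$; whether this is exactly the metric used in the paper is an assumption.

```python
import numpy as np
from scipy.linalg import sqrtm, logm, inv

def spd_distance(A, B):
    """Affine-invariant distance on the SPD manifold (assumed metric)."""
    s = inv(sqrtm(A))                            # A^{-1/2}
    return float(np.linalg.norm(logm(s @ B @ s), 'fro'))

rng = np.random.default_rng(0)
M = rng.normal(size=(3, 3))
A = M @ M.T + 3.0 * np.eye(3)                    # random SPD matrix in P(3)
B = A + 0.5 * np.eye(3)
print(spd_distance(A, B))
```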

18.
A novel Fourier-based technique for local motion detection from image sequences is proposed. In this method, the instantaneous velocities of local image points are inferred directly from the global 3D Fourier components of the image sequence. This is done by selecting those velocities for which the superposition of the corresponding Fourier gratings leads to constructive interference at the image point. Hence, image velocities can be assigned locally even though position is computed from the phases and amplitudes of global Fourier components (spanning the whole image sequence) that have been filtered based on the motion-constraint equation, reducing certain aperture effects typically arising from windowing in other methods. Regularization is introduced for sequences having smooth flow fields. Aperture effects and their effect on optic-flow regularization are investigated in this context. The algorithm is tested on both synthetic and real image sequences and the results are compared to those of other local methods. Finally, we show that other motion features, i.e. motion direction, can be computed using the same algorithmic framework without requiring an intermediate representation of local velocity, which is an important characteristic of the proposed method.
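The Fourier-domain fact underlying such methods is standard: a pattern translating rigidly with velocity $\mathbf{v}$, $I(\mathbf{x}, t) = I_0(\mathbf{x} - \mathbf{v}t)$, has a 3D spectrum confined to a plane through the origin,

$$ \hat I(\mathbf{k}, \omega) = \hat I_0(\mathbf{k})\,\delta(\omega + \mathbf{k}\cdot\mathbf{v}), \qquad \text{i.e.} \quad \mathbf{k}\cdot\mathbf{v} + \omega = 0 , $$

which is the Fourier form of the motion-constraint equation; selecting Fourier components consistent with this plane at a given image point is what the velocity-assignment step above exploits.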

19.
We propose a technique for handling recursive axioms in deductive databases. More precisely, we solve the following problem: given a relational query involving virtual relations defined by axioms (Horn clauses whose conclusion variables all appear in the hypotheses), possibly recursive, how do we translate this query into a relational program, i.e. a set of relational operations over real (not virtual) relations only? Our solution has the following properties (a toy sketch of such a terminating translation follows the list):
  • the program that evaluates the query always terminates;
  • the relational program is produced by pure compilation of the source query and the axioms, and is independent of the data values (there is no run-time component);
  • the relational operations are optimized: they focus on the computation of the query, without needless computation.

As far as we know, the Alexander Method is the first solution exhibiting all these properties. This work is partly funded by Esprit Project 112 (KIMS).
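The sketch below is not the Alexander Method itself, but it illustrates the target of such a translation: the recursive axiom path(X,Z) ← path(X,Y), edge(Y,Z) evaluated bottom-up by semi-naive iteration, using only joins and unions over finite relations, so evaluation always terminates.

```python
def transitive_closure(edge):
    """Semi-naive evaluation of: path(X,Y) <- edge(X,Y);
                                 path(X,Z) <- path(X,Y), edge(Y,Z)."""
    path = set(edge)
    delta = set(edge)                       # newly derived facts only
    while delta:
        # join the new facts with edge, keep what is genuinely new
        delta = {(x, z) for (x, y) in delta
                        for (y2, z) in edge if y == y2} - path
        path |= delta                       # union into the result
    return path

edges = {(1, 2), (2, 3), (3, 4)}
print(sorted(transitive_closure(edges)))
# [(1, 2), (1, 3), (1, 4), (2, 3), (2, 4), (3, 4)]
```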

20.
There is rapid growth in Social Network Services (SNSs), and a variety of SNSs are applied in online interpersonal platforms. Among them, asynchronous and synchronous discussions have been widely examined; however, there is a lack of research into the effects of integrated discussion services that combine asynchronous and synchronous discussion. This study therefore investigates users' performance and behavior patterns in a mixed discussion model that integrates asynchronous and synchronous discussions, using lag sequential analysis. The results showed that most groups chose to adopt the mixed discussion model (i.e., using both synchronous and asynchronous discussions), and only one group adopted a purely asynchronous discussion model. The study further analyzed the learners' learning effectiveness and behavioral patterns; the results indicated that the groups using the mixed model performed positively, to a certain extent, in terms of learning effectiveness and knowledge construction. In addition, users of the mixed discussion model demonstrated diverse behaviors that were more complex than those of users of a single-mode discussion model. Furthermore, comparing the groups using balanced synchronous and asynchronous discussion with the groups mainly using synchronous discussion supplemented by less asynchronous discussion, the latter showed more diverse knowledge-construction behaviors.

