Similar Documents
15 similar documents found (search time: 15 ms)
1.
The book “Handbook of Finsler geometry” includes a CD containing an elegant Maple package, FINSLER, for calculations in Finsler geometry. Using this package, an example concerning a Finsler generalization of Einstein’s vacuum field equations was treated; in this example, the calculation of the components of the hv-curvature of the Cartan connection leads to incorrect expressions. Moreover, the FINSLER package works only in dimension four. We introduce a new Finsler package that fixes both problems. We also extend this package to compute not only the geometric objects associated with the Cartan connection but also those associated with the Berwald, Chern and Hashiguchi connections, in any dimension. These improvements are illustrated by a concrete example. Furthermore, the problem of simplifying tensor expressions is treated. This paper is intended to make calculations in Finsler geometry easier and simpler.

2.
In this paper, we investigate the use of heat kernels as a means of embedding the individual nodes of a graph in a vector space. The reason for turning to the heat kernel is that it encapsulates information concerning the distribution of path lengths, and hence node affinities, on the graph. The heat kernel of the graph is found by exponentiating the Laplacian eigensystem over time. We explore how graphs can be characterized in a geometric manner using embeddings into a vector space obtained from the heat kernel, via two different embedding strategies. The first is a direct method in which the matrix of embedding co-ordinates is obtained by performing a Young–Householder decomposition on the heat kernel. The second method is indirect and involves performing a low-distortion embedding by applying multidimensional scaling to the geodesic distances between nodes. We show how the required geodesic distances can be computed using the parametrix expansion of the heat kernel. Once the nodes of the graph are embedded using one of the two alternative methods, we can characterize them in a geometric manner using the distribution of the node co-ordinates. We investigate several alternative methods of characterization, including spatial moments of the embedded points, the Laplacian spectrum of the Euclidean distance matrix, and scalar curvatures computed from the difference between geodesic and Euclidean distances. We experiment with the resulting algorithms on the COIL database.
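The first (direct) strategy described above can be sketched in a few lines: build the heat kernel from the Laplacian eigensystem and read node coordinates off a Young–Householder factorisation. This is a minimal illustration under the abstract's definitions; the function name and the toy path graph are invented for the example.

```python
import numpy as np

def heat_kernel_embedding(A, t=1.0):
    """Minimal sketch: heat kernel H = exp(-t L) of a graph with
    symmetric adjacency A, plus coordinates Y with H = Y.T @ Y
    (Young-Householder style decomposition)."""
    D = np.diag(A.sum(axis=1))
    L = D - A                                    # combinatorial Laplacian
    lam, Phi = np.linalg.eigh(L)                 # Laplacian eigensystem
    H = Phi @ np.diag(np.exp(-t * lam)) @ Phi.T  # exponentiate over time t
    Y = np.diag(np.exp(-0.5 * t * lam)) @ Phi.T  # Y.T @ Y reproduces H
    return H, Y

# toy example: 3-node path graph
A = np.array([[0., 1., 0.],
              [1., 0., 1.],
              [0., 1., 0.]])
H, Y = heat_kernel_embedding(A, t=0.5)
```

The columns of `Y` are the node coordinates; geometric characterizations (moments, distance matrices) can then be computed from them.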

3.
This paper investigates spectral approaches to the problem of point pattern matching. We make two contributions. First, we consider rigid point-set alignment. Here we show how kernel principal components analysis (kernel PCA) can be used effectively to solve the rigid point correspondence matching problem when the point-sets are subject to outliers and random position jitter. Specifically, we show how the point-proximity matrix can be kernelised, and spectral correspondence matching transformed into a kernel PCA problem. Second, we turn our attention to the matching of articulated point-sets. Here we show how label consistency constraints can be incorporated into the definition of the point-proximity matrix. The new methods are compared to those of Shapiro and Brady and of Scott and Longuet-Higgins, together with multidimensional scaling. We provide experiments on both synthetic and real-world data.
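The core idea of the first contribution can be illustrated as follows: kernelise each set's proximity matrix with a Gaussian kernel, embed the points by kernel PCA, and assign correspondences by nearest neighbours in the embedded space. This is an illustrative stand-in, not the authors' exact formulation; the kernel width, sign-fixing rule and toy data are invented for the example.

```python
import numpy as np

def spectral_match(X, Y, sigma=1.5, k=2):
    """Illustrative sketch: kernel-PCA embedding of each point-set's
    kernelised proximity matrix, then nearest-neighbour matching."""
    def embed(P):
        D2 = ((P[:, None, :] - P[None, :, :]) ** 2).sum(-1)
        K = np.exp(-D2 / (2 * sigma ** 2))       # kernelised proximity
        n = len(P)
        J = np.eye(n) - np.ones((n, n)) / n
        lam, V = np.linalg.eigh(J @ K @ J)       # kernel PCA on centred K
        idx = np.argsort(lam)[::-1][:k]
        U = V[:, idx] * np.sqrt(np.maximum(lam[idx], 0.0))
        for j in range(k):                       # fix eigenvector sign ambiguity
            i = int(np.argmax(np.abs(U[:, j])))
            if U[i, j] < 0:
                U[:, j] = -U[:, j]
        return U
    FX, FY = embed(X), embed(Y)
    cost = ((FX[:, None, :] - FY[None, :, :]) ** 2).sum(-1)
    return cost.argmin(axis=1)                   # correspondence X -> Y

X = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 2.0], [3.2, 1.1]])
perm = np.array([2, 0, 3, 1])
match = spectral_match(X, X[perm])               # recover the permutation
```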

4.
Canonical Correlation Analysis is a technique for finding pairs of basis vectors that maximise the correlation of a set of paired variables; these pairs can be considered as two views of the same object. This paper provides a convergence analysis of Canonical Correlation Analysis by defining a pattern function that captures the degree to which the features from the two views are similar. We analyse the convergence using Rademacher complexity, thereby deriving an error bound for new data. The analysis provides further justification for the regularisation of kernel Canonical Correlation Analysis and is corroborated by experiments on real-world data.
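The quantity being analysed can be made concrete with a small sketch that computes the leading canonical correlation of two views; the Tikhonov term `reg` stands in for the regularisation whose need the bound justifies. This is a generic linear-CCA illustration, not the paper's kernel formulation, and the synthetic two-view data are invented for the example.

```python
import numpy as np

def first_canonical_correlation(X, Y, reg=1e-3):
    """Minimal sketch: leading canonical correlation of two centred
    views, with Tikhonov regularisation of the view covariances."""
    X = X - X.mean(axis=0)
    Y = Y - Y.mean(axis=0)
    n = len(X)
    Cxx = X.T @ X / n + reg * np.eye(X.shape[1])
    Cyy = Y.T @ Y / n + reg * np.eye(Y.shape[1])
    Cxy = X.T @ Y / n
    Lx = np.linalg.cholesky(Cxx)
    Ly = np.linalg.cholesky(Cyy)
    # singular values of the whitened cross-covariance are the
    # canonical correlations
    M = np.linalg.inv(Lx) @ Cxy @ np.linalg.inv(Ly).T
    return np.linalg.svd(M, compute_uv=False)[0]

# two noisy linear views of the same latent object
rng = np.random.default_rng(0)
Z = rng.normal(size=(500, 2))
X = Z @ rng.normal(size=(2, 3)) + 0.01 * rng.normal(size=(500, 3))
Y = Z @ rng.normal(size=(2, 4)) + 0.01 * rng.normal(size=(500, 4))
rho = first_canonical_correlation(X, Y)          # close to 1 for shared views
```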

5.
The present study proposes a fast, accurate and automated segmentation approach for mammographic images using a kernel-based fuzzy c-means (FCM) clustering technique. The approach exploits significant regional features of mammograms that reflect the properties of different breast densities. It captures those regional features using an appropriate kernel and then applies fuzzy clustering to segment the masses. The study also introduces the kernel-based FCM (KFCM) approach in a folded way, so that a combination of significant features is processed simultaneously. A suitable choice of kernel size also helps to capture all possible variations of regional features with minimal blocking effects in the output. The performance of the proposed methodology is analyzed qualitatively and quantitatively against other clustering-based segmentation techniques; because the proposed approach can resolve the uncertain and imprecise characteristics of mammograms, it outperforms them. The convergence time of the proposed method is also assessed and compared with conventional clustering techniques: the kernel-based formulation reduces the number of data points used for clustering, so convergence is faster than with conventional algorithms. The study also reports how the convergence speed of the proposed segmentation method varies with image size.
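The KFCM update at the heart of such a method can be sketched for a Gaussian kernel, where the feature-space distance to a prototype reduces to 2·(1 − K(x, v)). This is a textbook-style sketch under that assumption, not the paper's exact folded formulation; the 1-D toy "intensities" and initialisation rule are invented for the example.

```python
import numpy as np

def kfcm(X, c=2, m=2.0, sigma=2.0, iters=50):
    """Minimal kernel fuzzy c-means sketch with a Gaussian kernel.
    X: (n, features); returns memberships U (n, c) and prototypes V."""
    n = len(X)
    V = X[np.linspace(0, n - 1, c).astype(int)].astype(float)  # spread init
    for _ in range(iters):
        K = np.exp(-((X[:, None, :] - V[None, :, :]) ** 2).sum(-1)
                   / (2 * sigma ** 2))
        d = np.maximum(1.0 - K, 1e-12)            # kernel-induced distance
        U = d ** (-1.0 / (m - 1.0))
        U /= U.sum(axis=1, keepdims=True)         # fuzzy memberships
        W = (U ** m) * K
        V = (W.T @ X) / W.sum(axis=0)[:, None]    # prototype update
    return U, V

# toy 1-D "pixel intensities": two well-separated groups
X = np.array([[0.0], [0.5], [1.0], [9.0], [9.5], [10.0]])
U, V = kfcm(X, c=2)
labels = U.argmax(axis=1)
```

In a segmentation setting, each row of `X` would be a pixel's regional feature vector rather than a raw intensity.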

6.
The urban-rural fringe, the region located between urban and rural areas, is the frontier of urban expansion against rural reservations. Identifying this region precisely, a step usually over-simplified by researchers, is the most important prerequisite in studies of urban-rural patterns. In this study, we propose a new model, combining the wavelet transform and kernel density estimation, to identify the urban-rural fringe from land use data. A case study of Beijing shows that the model delineates the boundaries of the urban-rural fringe precisely with respect to the different landscape patterns of different regions (the central urban area, the urban-rural fringe, and the outer rural area). Furthermore, thanks to the self-adaptive-bandwidth kernel density estimation, the model can also distinguish some of the satellite towns from the central urban area and the outer rural area when drawing the fringe boundaries.
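The self-adaptive-bandwidth kernel density estimation the model relies on can be sketched in one dimension. Abramson's square-root rule is assumed here (dense regions get narrower kernels, sparse regions wider ones); the paper's exact bandwidth scheme may differ, and the sample data are invented for the example.

```python
import numpy as np

def adaptive_kde(samples, grid, h=1.0):
    """Sample-point adaptive-bandwidth Gaussian KDE sketch
    (Abramson's square-root rule on a fixed-bandwidth pilot)."""
    def gauss(u):
        return np.exp(-0.5 * u ** 2) / np.sqrt(2 * np.pi)
    # pilot density with fixed bandwidth h
    pilot = gauss((samples[:, None] - samples[None, :]) / h).mean(1) / h
    lam = np.sqrt(np.exp(np.log(pilot).mean()) / pilot)   # local factors
    hi = h * lam                                          # per-sample bandwidths
    return (gauss((grid[:, None] - samples[None, :]) / hi) / hi).mean(1)

# a dense "urban core" near 0 and sparse "rural" points from 5 to 15
samples = np.concatenate([np.linspace(-0.5, 0.5, 50),
                          np.linspace(5.0, 15.0, 10)])
grid = np.linspace(-10.0, 25.0, 701)
density = adaptive_kde(samples, grid, h=1.0)
```

The dense region keeps a sharp, high peak while the sparse region is smoothed over a wider support, which is what lets boundaries adapt to local landscape density.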

7.
Fault detection and diagnosis (FDD) in chemical process systems is an important tool for effective process monitoring to ensure the safety of a process. Multi-scale classification offers various advantages for monitoring chemical processes, which are generally driven by events in different time and frequency domains. However, there are issues when dealing with highly interrelated, complex, and noisy databases of large dimensionality. Therefore, a new FDD framework is proposed based on wavelet analysis, kernel Fisher discriminant analysis (KFDA), and support vector machine (SVM) classifiers. The main objective of this work was to combine the advantages of these tools to enhance diagnosis performance on a chemical process system. Initially, a discrete wavelet transform (DWT) was applied to extract the dynamics of the process at different scales. The wavelet coefficients obtained during the analysis were reconstructed using the inverse discrete wavelet transform (IDWT) and then fed into the KFDA to produce discriminant vectors. Finally, the discriminant vectors were used as inputs for the SVM classification task. The performance of the proposed multi-scale KFDA-SVM method for fault classification and diagnosis was analysed and compared using the simulated Tennessee Eastman process as a benchmark. The results showed that the proposed multi-scale KFDA-SVM framework achieved an average classification accuracy of 96.79% on the Tennessee Eastman faults, improving on both the multi-scale KFDA-GMM method (84.94%) and the established independent component analysis-SVM method (95.78%).
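The multi-scale front end of such a pipeline can be illustrated with a plain Haar DWT computing detail-energy features per scale. The paper's wavelet family, the IDWT reconstruction step and the KFDA/SVM stages are not reproduced here; the feature choice (per-scale energies) and signal are invented for the example.

```python
import numpy as np

def haar_dwt(x):
    """One level of the Haar discrete wavelet transform:
    approximation (low-pass) and detail (high-pass) coefficients."""
    x = np.asarray(x, dtype=float)
    a = (x[0::2] + x[1::2]) / np.sqrt(2)   # approximation
    d = (x[0::2] - x[1::2]) / np.sqrt(2)   # detail
    return a, d

def multiscale_features(x, levels=3):
    """Sketch of a multi-scale feature stage: detail-coefficient
    energies at each scale, as candidate inputs to a downstream
    discriminant/SVM classifier (not implemented here)."""
    feats = []
    a = np.asarray(x, dtype=float)
    for _ in range(levels):
        a, d = haar_dwt(a)
        feats.append((d ** 2).sum())       # detail energy at this scale
    feats.append((a ** 2).sum())           # residual approximation energy
    return np.array(feats)

x = np.arange(8.0)                          # toy process measurement
feats = multiscale_features(x, levels=3)
```

Because the Haar transform is orthonormal, the per-scale energies sum exactly to the signal energy, so no information about signal power is lost across scales.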

8.
The Gaussian kernel function implicitly defines the feature space of an algorithm and plays an essential role in the application of kernel methods. The parameter of the Gaussian kernel function is a scalar that has a significant influence on the final results. However, it is still unclear how to choose an optimal kernel parameter. In this paper, we propose a novel data-driven method to optimize the Gaussian kernel parameter, which depends only on the original dataset distribution and yields a simple solution to this complex problem. The proposed method is task-irrelevant and can be used in any Gaussian kernel-based approach, including supervised and unsupervised machine learning. Simulation experiments demonstrate the efficacy of the proposed method. A user-friendly online calculator is available for public use at: www.csbio.sjtu.edu.cn/bioinf/kernel/.
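The paper's specific data-driven rule is not reproduced here; as a point of comparison, a widely used distribution-only baseline is the median heuristic, which sets the kernel width to the median pairwise distance of the dataset. The function names below are illustrative.

```python
import numpy as np

def median_heuristic_sigma(X):
    """Baseline data-driven kernel width: the median of all distinct
    pairwise Euclidean distances in the dataset (median heuristic)."""
    D = np.sqrt(((X[:, None, :] - X[None, :, :]) ** 2).sum(-1))
    iu = np.triu_indices(len(X), k=1)     # distinct pairs only
    return np.median(D[iu])

def gaussian_kernel(X, sigma):
    """Gaussian (RBF) kernel matrix K_ij = exp(-||xi - xj||^2 / 2 sigma^2)."""
    D2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    return np.exp(-D2 / (2 * sigma ** 2))

X = np.array([[0.0], [1.0], [3.0]])       # pairwise distances {1, 2, 3}
sigma = median_heuristic_sigma(X)          # -> median 2
K = gaussian_kernel(X, sigma)
```

Like the paper's method, this choice depends only on the dataset distribution and is agnostic to the downstream task.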

9.
The Poincaré code is a Maple project package that aims to gather significant computer algebra normal form (and subsequent reduction) methods for handling nonlinear ordinary differential equations. As a first version, a set of fourteen easy-to-use Maple commands is introduced for the symbolic creation of (improved variants of Poincaré's) normal forms as well as their associated normalizing transformations. The software is the authors' implementation of carefully studied and selected normal form procedures from the literature, including some of the authors' own contributions to the subject. Joint-normal-form programs involving Lie-point symmetries are of special interest and are published in the CPC Program Library for the first time; Hamiltonian variants are also very useful, as they lead to encouraging results when applied, for example, to models from computational physics such as the Hénon–Heiles system.

10.
Using benchmark problems to demonstrate and compare novel methods with the work of others could be more widely adopted by the Soft Computing community. This article contains a collection of several benchmark problems in nonlinear control and system identification, presented in a standardized format. Each problem is augmented by examples where it has been adopted for comparison. The selected examples range from component-level to plant-level problems and originate mainly from the areas of mechatronics/drives and process systems. The authors hope that this overview contributes to a better adoption of benchmarking in method development, testing and demonstration.

11.
In this new version of ISICS, called ISICS2011, a few omissions and incorrect entries in the built-in file of electron binding energies have been corrected; operational situations leading to un-physical behavior have been identified and flagged.

New version program summary

Program title: ISICS2011
Catalogue identifier: ADDS_v5_0
Program summary URL: http://cpc.cs.qub.ac.uk/summaries/ADDS_v5_0.html
Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html
No. of lines in distributed program, including test data, etc.: 6011
No. of bytes in distributed program, including test data, etc.: 130 587
Distribution format: tar.gz
Programming language: C
Computer: 80486 or higher-level PCs
Operating system: WINDOWS XP and all earlier operating systems
Classification: 16.7
Catalogue identifier of previous version: ADDS_v4_0
Journal reference of previous version: Comput. Phys. Commun. 180 (2009) 1716.
Does the new version supersede the previous version?: Yes
Nature of problem: Ionization and X-ray production cross section calculations for ion–atom collisions.
Solution method: Numerical integration of form factor using a logarithmic transform and Gaussian quadrature, plus exact integration limits.
Reasons for new version: General need for higher precision in the output format for projectile energies; some built-in binding energies needed correcting; some anomalous results occurred due to faulty read-in data or calculated parameters becoming un-physical; erroneous calculations could result for the L and M shells when restricted K-shell options were inadvertently chosen; and to achieve general compatibility with ISICSoo, a companion C++ version portable to Linux and MacOS platforms, submitted for publication in the CPC Program Library at approximately the same time as this new standalone version of ISICS [1].
Summary of revisions: The format field for projectile energies in the output has been expanded from two to four decimal places in order to distinguish between closely spaced energy values. A few entries in the executable binding energy file needed correcting: the K shell of Eu, the M shells of Zn, and the M1 shell of Kr.
The corrected values were also entered in the ENERGY.DAT file. In addition, an alternate data file of binding energies, ENERGY_GW.DAT, which is more up-to-date [2], is now included. Likewise, an alternate atomic parameters data file, FLOURE_JC.DAT, is now included, containing more up-to-date [3] fluorescence yields for the K and L shells and Coster–Kronig parameters for the L shell. Both data files can be read in using the -f usage option. To do this, the original energy file should be renamed and saved (e.g., as ENERGY_BB.DAT) and the new file (ENERGY_GW.DAT) duplicated as ENERGY.DAT so that it is read in with the -f option; similarly for reading in an alternate FLOURE.DAT file. As with previous versions, the user can also simply input different values of any input quantity by invoking the “specify your own parameters” option from the main menu; this option can also be used to check the built-in values of the parameters. If a zero binding energy for a particular sub-shell is read in, the program will no longer abort completely, but will calculate results for the other sub-shells while setting the affected sub-shell output to zero. In calculating the Coulomb deflection factor, if the quantity inside the radical sign of the parameter zs becomes zero or negative, the PWBA cross sections are still calculated while the ECPSSR cross sections are set to zero, which prevents the program from aborting. This situation can occur for very low energy collisions, such as were noticed for helium ions on copper at energies of E ≤ 11.2 keV. It was observed during the engineering of ISICSoo [1] that erroneous calculations could result for the L- and M-shell cases when the restricted K-shell R or HSR scaling options were inappropriately chosen; the program has now been fixed so that these inappropriate options are ignored for the L and M shells.
In the previous versions, the usage for inputting a batch data file was incorrectly stated in the Users Manual as -Bxxx; the correct designation is -Fxxx or, alternatively, -Ixxx, as indicated on the usage screen when running the program. A revised Users Manual is also available.
Restrictions: The consumed CPU time increases with the atomic shell (K, L, M), but execution is still very fast.
Running time: This depends on which shell and the number of different energies used in the calculation. The running time is not significantly changed from the previous version.
References:
[1] M. Batic, M.G. Pia, S. Cipolla, Comput. Phys. Commun. (2011), submitted for publication.
[2] http://www.jlab.org/~gwyn/ebindene.html
[3] J. Campbell, At. Data Nucl. Data Tables 85 (2003) 291.

12.
Water distribution networks are large, complex systems affected by leaks, which often entail high costs and may severely jeopardise overall water distribution performance. Successful leak location is paramount in order to minimize the impact of leaks when they occur. Sensor placement is a key issue in the leak location process, since the overall performance and success of this process depend heavily on the choice of the sensors gathering data from the network. Common problems when isolating leaks in large-scale, highly gridded real water distribution networks include leak mislabelling and obtaining a large number of possible leak locations. This is due to the similarity of leak effects in the measurements, which may be caused by topological issues and lead to incomplete coverage of the network. The sensor placement strategy may minimize these undesired effects by setting up the sensor placement optimisation problem with appropriate assumptions (e.g. geographically clustering alike leak behaviours) and by taking into account practical aspects of the application, such as the acceptable leak location distance. In this paper, a sensor placement methodology considering these aspects, together with a general sensor distribution assessment method for leak diagnosis in water distribution systems, is presented and exemplified with a small illustrative case study. Finally, the proposed method is applied to two real District Metered Areas (DMAs) within the Barcelona water distribution network.
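A toy version of the sensor placement step can be phrased as greedy pair separation: pick sensors so that as many candidate leaks as possible produce distinguishable readings. This is an illustrative stand-in, not the paper's optimisation formulation; the sensitivity matrix `S` and the exact-difference separation criterion are invented for the example.

```python
import numpy as np

def greedy_sensor_placement(S, n_sensors):
    """Greedily choose sensor locations that separate leak pairs.
    S[i, j] = residual observed at candidate sensor j under leak i;
    two leaks are 'separated' by a sensor where their readings differ."""
    n_leaks, n_cand = S.shape
    pairs = [(a, b) for a in range(n_leaks) for b in range(a + 1, n_leaks)]
    chosen, separated = [], set()
    for _ in range(n_sensors):
        best, best_gain = None, -1
        for j in range(n_cand):
            if j in chosen:
                continue
            gain = sum(1 for (a, b) in pairs
                       if (a, b) not in separated and S[a, j] != S[b, j])
            if gain > best_gain:
                best, best_gain = j, gain
        chosen.append(best)
        separated |= {(a, b) for (a, b) in pairs if S[a, best] != S[b, best]}
    return chosen

# 3 candidate leaks, 3 candidate sensor sites
S = np.array([[1, 0, 0],
              [1, 1, 0],
              [0, 1, 1]])
chosen = greedy_sensor_placement(S, 2)
```

In practice the separation criterion would be a distance threshold on residual signatures (reflecting the acceptable leak location distance) rather than exact inequality.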

13.
Building Information Modeling (BIM) has emerged as a new paradigm for the construction industry, and the combination of electric distribution system design and BIM is the future development trend of architectural design. Traditional BIM-based electric distribution wiring design is mainly carried out manually, which is laborious and error-prone. Some studies have explored automated BIM wiring design based on graphs in building structures, HVAC, and the AEC field, but few efforts have addressed automatic electric distribution wiring. Given its complicated nature, practical construction factors such as design specifications and wiring costs should be considered alongside the wiring itself. To address this issue, this study proposes a BIM-based automatic indoor electric distribution wiring (IEDW) scheme using graph theory and the capacity-limited multiple traveling salesman problem (CMTSP). Firstly, the IEDW sets up a model of design specifications and actual wiring costs with four constraints: specification, cost, installation, and load. Secondly, an electric distribution wiring heterogeneous graph model is designed to handle the specification and installation constraints. Thirdly, a CMTSP-based algorithm is designed to solve the automatic wiring problem under the cost and load constraints. Finally, the performance of IEDW is evaluated on three types of buildings. The experimental results show that IEDW provides a more reasonable wiring solution than manual wiring in terms of circuit balance and total wiring length, and reduces wiring cost by 8.8% on average. The proposed IEDW will enhance the efficiency of BIM-based electric system design and delivery.

14.
Fires constitute a major ecological disturbance that influences the natural cycle of vegetation succession and the structure and function of ecosystems. There is no single natural scale at which ecological phenomena are completely understood, so the capacity to handle scale benefits methodological frameworks for analyzing and monitoring ecosystems. Although satellite imagery has been widely applied to the assessment of fire-related topics, few studies consider fire at several spatial scales simultaneously. This research explores the relationships between fire occurrence and several families of environmental factors at different spatial observation scales by means of classification and regression tree models. Predictors accounting for vegetation status (estimated by spectral indices derived from Landsat imagery), fire history, topography, accessibility and vegetation types were included in the models of fire occurrence probability. We defined four scales of analysis by identifying four meaningful thresholds related to fire sizes in the study site. The sampling methodology was based on random points and the power-law distribution describing the local fire regime. The observation scale drastically affected tree size, and therefore the achieved level of detail, as well as the most explanatory variables in the trees. As a general trend, trees considering all the variables had a spectral index ruling the most explanatory split. From the comparison of the four pre-determined analysis scales, we propose the existence of three potential organization levels: the landscape patch or ecosystem level, the local level, and the basic level, the most heterogeneous and complex scale. Rules with three levels of complexity and applicability for management were defined in the tree models: (i) the repeated critical thresholds (predictor values across which fire characteristics change rapidly), (ii) the meaningful final probability classes and (iii) the trees themselves.
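The "critical thresholds" that classification trees extract can be illustrated with a single-split sketch: find the cut point on one predictor that best separates fire from no-fire samples by Gini impurity. All names and data here are invented for the example, not taken from the study.

```python
import numpy as np

def best_split(x, y):
    """Illustrative stump: the threshold on one predictor that best
    separates binary fire / no-fire labels by weighted Gini impurity."""
    def gini(labels):
        if len(labels) == 0:
            return 0.0
        p = labels.mean()
        return 2 * p * (1 - p)
    best_t, best_score = None, np.inf
    for t in np.unique(x)[:-1]:            # candidate cut points
        left, right = y[x <= t], y[x > t]
        score = (len(left) * gini(left) + len(right) * gini(right)) / len(y)
        if score < best_score:
            best_t, best_score = t, score
    return best_t, best_score

# hypothetical spectral-index values and fire occurrence labels:
# low vegetation-status index -> fire observed
x = np.array([0.1, 0.2, 0.3, 0.6, 0.7, 0.8])
y = np.array([1.0, 1.0, 1.0, 0.0, 0.0, 0.0])
t, score = best_split(x, y)                # perfect split at 0.3
```

A full tree repeats this search recursively per node; repeated thresholds across scales are what the study reads as management rules.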
