Similar Documents (20 results)
1.
Several methods of quantitative biostratigraphy that are based on assemblage zones are examined utilizing three sets of faunal distribution data. Two of the data sets are structured simply and one is complex. Various types of cluster analysis and multidimensional scaling are applied to weighted and unweighted binary (presence/absence) data. For weighted data, the presences are multiplied by the relative biostratigraphic values (RBV) of the taxa. There are two options for calculating the RBV's. One method (RBV2) emphasizes time-stratigraphic correlation, whereas the other is a compromise between time-stratigraphic correlation and biofacies correlation (RBV1). Results from lateral tracing also are examined.

The case studies allow the formulation of a general strategy. Weighting is not appropriate if paleoecological groupings are sought. If biostratigraphic zonation is required, weighted data may produce clusters that are stratigraphically more homogeneous than those based on unweighted data. Also, the RBV's can point out species that can be deleted from the analysis without losing significant biostratigraphic information. Range-through data should be employed in most situations. Similarity matrices between samples can be calculated from various coefficients based on presences. Biostratigraphic zones are extracted from the similarity matrix by cluster analysis, multidimensional scaling, and lateral tracing to produce an overall view of the data structure. Lines of correlation and fence diagrams can be constructed between the samples in adjacent stratigraphic sections using the same techniques.
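
As an illustrative sketch of the clustering step (a minimal example, not the authors' code; the data matrix, the RBV values, and the coefficient/linkage choices are all invented):

```python
# Sketch: grouping samples into assemblage zones from presence/absence
# data. The matrix, RBV values, and linkage choices are all invented.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import pdist

# rows = samples, columns = taxa (1 = present, 0 = absent)
X = np.array([[1, 1, 0, 1],
              [1, 1, 0, 0],
              [0, 1, 1, 1],
              [0, 0, 1, 1]])

# Unweighted run (appropriate when paleoecological groupings are sought):
D = pdist(X.astype(bool), metric="jaccard")
zones = fcluster(linkage(D, method="average"), t=2, criterion="maxclust")
print(zones)                                 # candidate zone label per sample

# Weighted run (for zonation): multiply presences by each taxon's RBV and
# switch to a coefficient that tolerates non-binary values, e.g. cosine.
rbv = np.array([0.9, 0.4, 0.8, 0.5])         # assumed RBV's
Dw = pdist(X * rbv, metric="cosine")
zones_w = fcluster(linkage(Dw, method="average"), t=2, criterion="maxclust")
print(zones_w)
```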

2.
The three primary biostratigraphic attributes of fossils comprise geographical range, facies distribution, and vertical range. All attributes can be quantified in several ways. Relative biostratigraphic value (RBV) represents the amount of information which can be gained from observing the presence of a selected species. Several measures are available, three of which are discussed here. RBV1 weights all three attributes equally, and this index is a compromise between time-stratigraphic correlation and establishing the persistence of a particular biofacies. RBV2 assigns double weight to the vertical range compared to the other two parameters; this index is designed to identify species that are useful for time correlation. For some data sets a reasonable estimate, RBV3, can be obtained solely from the vertical range. The biostratigraphic attributes and RBV's can be applied in several ways. Examples are: determining the relations between the parameters, comparing the biostratigraphic properties of different data sets and groups of taxa, using RBV's as weighting functions, and selecting subsets of taxa with high RBV's for correlation.
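
The abstract does not give the formulas, so the following is only a plausible reading, not the paper's definitions: scores g, f, v in [0, 1] for geographic range, facies independence, and vertical-range restriction, with RBV1 averaging them equally and RBV2 doubling the weight on v.

```python
# Hypothetical RBV formulas consistent with the abstract's description;
# the paper's exact definitions may differ. Inputs g, f, v are scores
# in [0, 1] for geographic range, facies, and vertical range.
def rbv1(g, f, v):
    """Equal weighting of all three attributes (compromise index)."""
    return (g + f + v) / 3.0

def rbv2(g, f, v):
    """Double weight on vertical range, for time correlation."""
    return (g + f + 2.0 * v) / 4.0

def rbv3(v):
    """Estimate from the vertical range alone."""
    return v

print(rbv1(0.8, 0.6, 0.9), rbv2(0.8, 0.6, 0.9), rbv3(0.9))
```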

3.
Two species are defined as compatible if their chronologic co-occurrence has been observed (= real association) or can be deduced from biostratigraphic data (= virtual association). A unitary association (U.A.) is a maximal set of compatible species. Each U.A. is characterized by a set of species and/or by a set of species pairs: these characteristic elements are used to identify the U.A. in fossiliferous beds. The U.A.'s which can be identified in a large geographical area are said to be reproducible. A biochronological scale is an ordered sequence of reproducible U.A.'s. The problem of constructing such a discrete "time" scale is approached from a graph-theoretic point of view.
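
In graph-theoretic terms, a maximal set of compatible species is a maximal clique of the compatibility graph whose vertices are species and whose edges are real or virtual associations. A minimal sketch (species names and compatibilities invented):

```python
# Sketch: unitary associations as maximal cliques of the compatibility
# graph. Species names and compatibilities are invented.
import networkx as nx

G = nx.Graph()
# An edge means two species co-occur (real association) or their
# co-occurrence is deducible from the data (virtual association).
G.add_edges_from([("a", "b"), ("b", "c"), ("a", "c"),
                  ("c", "d"), ("d", "e"), ("c", "e")])

# Each maximal clique is a candidate unitary association (U.A.).
for ua in nx.find_cliques(G):
    print(sorted(ua))   # -> ['a', 'b', 'c'] and ['c', 'd', 'e']
```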

4.
A FORTRAN program which involves simple iterative, averaging, and sorting operations is effective for seriation of biostratigraphic data. The data matrix records the presence/absence of m taxa in n samples, which are grouped in p stratigraphic sections. The basic procedure is to arrange the taxa and samples into a range chart by concentrating the presences along the diagonal of the matrix in order to minimize the range zones of the taxa. The method can calculate two types of seriation. If information on the stratigraphic position of the samples is ignored, unconstrained seriation results and the samples are free to group in any order; this usually yields sequences of taxa and samples closely allied to those derived from multivariate techniques such as cluster analysis. If the data on stratigraphic superposition of the samples are used, the result is constrained seriation, in which the samples remain in stratigraphic order in the final matrix. Range charts derived from this type of seriation are most useful for stratigraphic correlation.
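
A minimal sketch of unconstrained seriation in the spirit of the abstract (not the FORTRAN program itself): alternately reorder samples and taxa by the mean position of their presences until the ordering stabilizes.

```python
# Minimal unconstrained-seriation sketch (not the paper's FORTRAN code):
# alternately reorder samples and taxa by the mean position of their
# presences, concentrating the 1s along the diagonal of the matrix.
import numpy as np

def seriate(X, iters=10):
    X = X.astype(float).copy()
    row_order, col_order = np.arange(X.shape[0]), np.arange(X.shape[1])
    for _ in range(iters):
        # mean column index of each sample's presences
        r = (X * np.arange(X.shape[1])).sum(axis=1) / X.sum(axis=1)
        idx = np.argsort(r, kind="stable")
        X, row_order = X[idx], row_order[idx]
        # mean row index of each taxon's presences
        c = (X.T * np.arange(X.shape[0])).sum(axis=1) / X.sum(axis=0)
        idx = np.argsort(c, kind="stable")
        X, col_order = X[:, idx], col_order[idx]
    return X, row_order, col_order

X = np.array([[0, 1, 1, 0],
              [1, 1, 0, 0],
              [0, 0, 1, 1],
              [1, 0, 0, 0]])
Xs, rows, cols = seriate(X)
print(Xs)   # presences concentrated along the diagonal
```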

5.
Mathematical techniques for the computer classification of congenital abnormalities using metacarpophalangeal lengths obtained from hand radiographs have been investigated. Discriminant analysis has been shown to be significantly better than similarity measures in distinguishing the normal condition, Down's syndrome, Turner's syndrome and achondroplasia from one another.

6.
Applying VSM and LCS to develop an integrated text retrieval mechanism
Text retrieval has received a lot of attention in computer science. In the text retrieval field, the most widely adopted similarity technique is to use vector space models (VSM) to evaluate term weights and Cosine, Jaccard or Dice coefficients to measure the similarity between the query and the texts. However, these similarity techniques do not consider the effect of the order in which the information appears. In this paper, we propose an integrated text retrieval (ITR) mechanism that takes advantage of both VSM and the longest common subsequence (LCS) algorithm. The key idea of the ITR mechanism is to use LCS to re-evaluate the weight of terms, so that the sequence and weight relationships between the query and the texts can be considered simultaneously. Mathematical analysis shows that the ITR mechanism can increase the similarity under the Jaccard and Dice measures when a sequential relationship exists between the query and the texts.
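
A hedged sketch of the underlying idea, not the paper's ITR weighting formulas: combine a set-overlap coefficient (here Dice) with the length of the longest common subsequence of the token sequences; the blend at the end is a made-up illustration.

```python
# Sketch of combining set-overlap similarity with sequence information;
# the final blend is invented, not the paper's ITR formulas.
def lcs_len(a, b):
    """Length of the longest common subsequence of token lists a and b."""
    dp = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    for i, x in enumerate(a, 1):
        for j, y in enumerate(b, 1):
            dp[i][j] = dp[i-1][j-1] + 1 if x == y else max(dp[i-1][j], dp[i][j-1])
    return dp[len(a)][len(b)]

def dice(a, b):
    """Dice coefficient on the term sets of a and b."""
    sa, sb = set(a), set(b)
    return 2 * len(sa & sb) / (len(sa) + len(sb))

query = "vector space model".split()
text = "the vector space retrieval model".split()
# Hypothetical blend: reward texts whose term order matches the query.
score = dice(query, text) * (1 + lcs_len(query, text) / len(query))
print(round(score, 3))
```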

7.
Exploratory pattern analysis is an iterative process. In many instances, numerous iterations are required before a practical model or classification scheme can be concisely stated and adequately analyzed. There are at least three distinct functions required for solving the important and difficult pattern recognition problems: (1) conceptualization of a classification model; (2) mathematical modeling and analysis of the practical and theoretical implications of the model; (3) testing the model on actual data. These tasks are interdependent, and the investigation proceeds in what often appears to be an unsystematic approach to problem solving. This paper addresses the third task and consequently, by association, hopefully affects the other two in a beneficial and constructive manner.

The purpose of this article is to illustrate a general methodology, based on a matrix approach, that can be used in organizing, formatting and statistically analyzing classifier results. The discussion is intended for all individuals interested in analyzing pattern analysis and classification experiments; however, it should be of particular interest to those involved in designing interactive pattern recognition software packages. The discussion proceeds from a matrix algebra study of classifier results to techniques for statistical analysis using Cohen's kappa and Cochran's Q statistics. An example from nuclear medicine is used to illustrate the methodology.
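
For reference, Cohen's kappa can be computed directly from a square agreement matrix; the counts below are invented, not the paper's nuclear-medicine data.

```python
# Cohen's kappa from a square agreement matrix (counts are invented).
import numpy as np

M = np.array([[20, 5],
              [3, 22]], dtype=float)   # rater/classifier A rows, B columns
n = M.sum()
po = np.trace(M) / n                          # observed agreement
pe = (M.sum(axis=0) @ M.sum(axis=1)) / n**2   # agreement expected by chance
kappa = (po - pe) / (1 - pe)
print(round(kappa, 3))                        # 0.68 for these counts
```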

8.
The Pythagorean fuzzy set (PFS) is characterized by two functions expressing the degree of membership and the degree of nonmembership, whose squared sum is less than or equal to 1. It was proposed as a generalization of a fuzzy set to deal with indeterminate and inconsistent information. In this study, we present some novel Dice similarity measures of PFSs and the generalized Dice similarity measures of PFSs, and indicate that the Dice similarity measures and the asymmetric measures (projection measures) are special cases of the generalized Dice similarity measures at particular parameter values. We then propose generalized Dice similarity measure-based multiple attribute group decision-making models with Pythagorean fuzzy information, and apply the generalized Dice similarity measures between PFSs to multiple attribute group decision making. Finally, an illustrative example is given to demonstrate the effectiveness of the similarity measures for selecting the desirable ERP system.
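
One common unweighted form of the Dice similarity between PFSs can be sketched as follows; the membership/nonmembership values are invented, and the paper's generalized measures add a parameter that interpolates toward the projection (asymmetric) measures.

```python
# One common unweighted form of the Dice similarity between PFSs;
# the element values here are invented for illustration.
def pfs_dice(A, B):
    """A, B: lists of (membership, nonmembership) pairs, mu^2 + nu^2 <= 1."""
    total = 0.0
    for (ma, na), (mb, nb) in zip(A, B):
        num = 2 * (ma**2 * mb**2 + na**2 * nb**2)
        den = (ma**4 + na**4) + (mb**4 + nb**4)
        total += num / den
    return total / len(A)

A = [(0.8, 0.3), (0.6, 0.5)]
B = [(0.7, 0.4), (0.5, 0.6)]
print(round(pfs_dice(A, B), 3))
```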

9.
Vectorcardiogram (VCG) data are often analyzed using the Karhunen-Loève expansion of the sample covariance matrix, S, as a method for discriminating between the VCG's of healthy and unhealthy patients. The estimator S, however, can be seriously affected both by atypical observations and by the number of VCG's in the database relative to their dimension. In this paper it is shown that alternative robust estimators of the covariance matrix are appealing for analyzing VCG data when outliers are present in the sample. It is also demonstrated that sample sizes in such experiments should be greatly expanded in order to validate the asymptotic properties of S.
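
A sketch of the contrast the abstract draws, using scikit-learn's minimum covariance determinant estimator as one example of a robust alternative (the paper's specific robust estimators may differ; the data are synthetic):

```python
# Sketch: Karhunen-Loeve basis from a robust covariance estimate.
# MinCovDet is one robust alternative to the sample covariance S;
# the data are synthetic stand-ins, not VCG's.
import numpy as np
from sklearn.covariance import EmpiricalCovariance, MinCovDet

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
X[:5] += 10                       # a few atypical observations

S = EmpiricalCovariance().fit(X).covariance_
R = MinCovDet(random_state=0).fit(X).covariance_

# Karhunen-Loeve basis = eigenvectors of the covariance estimate; the
# outliers inflate the leading eigenvalue of S but barely touch R.
print(np.linalg.eigvalsh(S)[-1], np.linalg.eigvalsh(R)[-1])
```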

10.
An application of the finite element method to the theory of thin-walled bars of variable cross section is presented in this paper. The solution is based on the linear membrane shell theory with the application of Vlasov's assumptions. A bar is divided into elements along its longitudinal axis, and the shell mid-surface of each element is approximated by arbitrary triangular subelements. Nodal displacements of the element are assumed to be third-order polynomials, and the equivalent stiffness matrix is obtained. The calculated nodal displacements enable an analysis of normal and shearing stresses.

11.
Most organizations now have substantial investments in their online Internet presences. For major financial institutions and retailers, the Internet provides both a cost-effective means of presenting their offerings to customers and a method of delivering a personalised 24/7 presence. In almost all cases, the preferred method of delivering these services is over common HTTP. Due to limitations within the protocol, there is no in-built facility to identify or track a particular customer (or session) uniquely within an application. Thus the connection between the customer's Web browser and the organization's Web service is commonly referred to as being "stateless". Because of this, organizations have been forced to adopt custom methods of managing client sessions if they wish to maintain state.

An important aspect of correctly managing state information through session IDs relates directly to authentication processes. While it is possible to insist that a client using an organization's Web application provide authentication information for each "restricted" page or data submission, this would soon become tedious and untenable. Thus session IDs are not only used to follow clients throughout the Web application, they are also used to identify each unique, authenticated user, thereby indirectly regulating access to site content or information.
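
By way of illustration only (not from the article): the now-standard practice is to draw the session ID from a cryptographically secure source and harden the cookie that carries it.

```python
# Illustration only (not from the article): an unpredictable session ID
# plus the cookie attributes that commonly protect it in transit.
import secrets

session_id = secrets.token_urlsafe(32)   # ~256 bits from a CSPRNG

cookie = (f"SESSIONID={session_id}; "
          "HttpOnly; Secure; SameSite=Strict; Path=/")
print(cookie)
```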

12.
Correlation analysis is regarded as a significant challenge in the mining of multidimensional data streams. Existing correlation analysis methods for data-stream mining place great emphasis on one-dimensional streams, so the identification of underlying correlations among multivariate arrays (e.g. sensor data) has long been ignored, and the technique of canonical correlation analysis (CCA) has rarely been applied to multidimensional data streams. In this study, a novel correlation analysis algorithm based on CCA, called ApproxCCA, is proposed to explore the correlations between two multidimensional data streams in environments with limited resources. By introducing unequal-probability sampling and low-rank approximation to reduce the dimensionality of the product matrix composed of the sample covariance and sample variance matrices, ApproxCCA improves computational efficiency while preserving analytical precision. Experimental results on synthetic and real data sets indicate that ApproxCCA overcomes the computational bottleneck of traditional CCA and accurately detects the correlations between two multidimensional data streams.
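
For orientation, the exact (non-streaming) CCA that ApproxCCA approximates can be computed directly; here with scikit-learn on synthetic data:

```python
# Baseline: exact (non-streaming) CCA between two multidimensional
# arrays, the quantity ApproxCCA approximates. Data are synthetic.
import numpy as np
from sklearn.cross_decomposition import CCA

rng = np.random.default_rng(1)
latent = rng.normal(size=(500, 1))                 # shared signal
X = np.hstack([latent + 0.1 * rng.normal(size=(500, 1)) for _ in range(4)])
Y = np.hstack([latent + 0.1 * rng.normal(size=(500, 1)) for _ in range(3)])

cca = CCA(n_components=1)
Xc, Yc = cca.fit_transform(X, Y)
r = np.corrcoef(Xc[:, 0], Yc[:, 0])[0, 1]
print(round(r, 3))   # close to 1: the shared latent signal is recovered
```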

13.
During the design process, the engineer frequently makes modifications to the structure. At times, the modification involves topological changes such as joint addition or joint deletion, which result in an increase or a decrease in the size of the stiffness matrix. These modifications may not lead to a completely different configuration. In such cases the results of a previous analysis can be used to decrease the computational effort needed for a complete reanalysis of the modified structure.

This paper presents two algorithms for obtaining the inverse of the modified stiffness matrix. The algorithms use the results of a previous solution to obtain solutions when joints are added to or deleted from a structure. The algorithm for joint addition uses Householder's identity to obtain the inverse of that portion of the modified stiffness matrix corresponding to the original stiffness matrix; the inverse of the modified stiffness matrix is then obtained by matrix partitioning. The algorithm for joint deletion uses the extraction of the inverse of a reduced matrix and Householder's identity to obtain the inverse of the modified stiffness matrix. A comparison of operation counts for the proposed algorithms and a complete inversion indicates that the proposed methods achieve a 20–80 per cent saving in computational effort.
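
Householder's identity here is the rank-update formula better known as Sherman-Morrison(-Woodbury); a numpy check of the rank-one case on a stand-in stiffness matrix:

```python
# Rank-one form of the Householder (Sherman-Morrison) identity:
#   (K + u v^T)^{-1} = K^{-1} - (K^{-1} u v^T K^{-1}) / (1 + v^T K^{-1} u)
# K is a stand-in symmetric positive definite "stiffness" matrix.
import numpy as np

rng = np.random.default_rng(2)
A = rng.normal(size=(6, 6))
K = A @ A.T + 6 * np.eye(6)
Kinv = np.linalg.inv(K)

u = rng.normal(size=(6, 1))          # rank-one modification
v = u                                # symmetric update: K' = K + u u^T

Kinv_mod = Kinv - (Kinv @ u @ v.T @ Kinv) / (1.0 + v.T @ Kinv @ u)
print(np.allclose(Kinv_mod, np.linalg.inv(K + u @ v.T)))   # True
```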

14.
APL software has been developed for the simulation of seismic sections based on two-dimensional geological models. The principle of zero-offset raypath tracing has been used in developing the software. Information regarding the depth, porosity, matrix velocity, matrix density, fluid saturation, and fluid density of the subsurface formations at different locations is input in an interactive mode, and the seismic section is produced from these data. The simulated seismic sections may be used in clarifying the structure and stratigraphy that may prevail in the subsurface.

15.

In this paper, we introduce some similarity measures for bipolar neutrosophic sets: the Dice similarity measure, weighted Dice similarity measure, hybrid vector similarity measure, and weighted hybrid vector similarity measure. We also examine propositions concerning these similarity measures. Furthermore, a multi-criteria decision-making method for bipolar neutrosophic sets is developed based on the given similarity measures. A practical example is then shown to verify the feasibility of the new method. Finally, we compare the proposed method with existing methods in order to demonstrate the practicality and effectiveness of the method developed in this paper.

16.
The sample mean and sample covariance matrix are unbiased and consistent estimates of the population mean and covariance matrix only if the samples are independent. In practical applications of Bayes' procedure these estimates are used in place of the population means and covariance matrices on the assumption of independence among the training samples. This practice has often given, especially in remote sensing data analysis, misclassification probabilities much higher than can be accounted for theoretically. The reason may be that the assumption of independence is not valid: in reality, the samples are rarely independent; they are dependent, at best equicorrelated. This paper investigates how such intraclass correlation among the training samples affects the misclassification probabilities of Bayes' procedure.
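
The effect is straightforward to demonstrate: for equicorrelated samples with intraclass correlation rho, the expected sample variance is sigma^2 * (1 - rho) rather than sigma^2. A small simulation (parameters invented):

```python
# Simulation: equicorrelated samples shrink the expected sample variance
# to sigma^2 * (1 - rho). Parameters are invented for illustration.
import numpy as np

rng = np.random.default_rng(3)
n, rho, trials = 50, 0.3, 2000
C = rho * np.ones((n, n)) + (1 - rho) * np.eye(n)   # equicorrelation matrix
L = np.linalg.cholesky(C)

est = [(L @ rng.normal(size=n)).var(ddof=1) for _ in range(trials)]
print(round(float(np.mean(est)), 3))   # ~0.7 = 1 - rho, not 1
```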

17.
The program inputs any number of whole-rock analyses, with up to 28 prescribed elements determined. These may be in any order, as oxide or element percentages, and may contain missing data. From the oxide weight percentages, correlation, regression and principal component analyses can be performed. Molecular proportions are computed, and from them CIPW norms and Niggli numbers may be calculated. Cation proportions are then computed, and Barth's standard cell, basis components, molecular norms and Barth's mesonorms (for oversaturated rocks) may be generated. Line-printer X-Y graphs, X-Y-Z triangular diagrams, or histograms can be generated from any chosen set of parameters. Operation of the program requires no previous computer experience, but the competent user could readily extend the available options.

18.
The scaling algorithms presented in this paper form the second part of the RASC program for ranking and scaling of biostratigraphic events and other events which can be identified uniquely. An optimum sequence constructed by a ranking algorithm provides the starting point for estimating average "distances" between successive events. The frequency of crossover (mismatch) of events between sections is used for this purpose. Distances are clustered by constructing a dendrogram which can be used as a standard and permits the definition of assemblage zones. Options in the second part of the RASC program include the following. (i) Normality test option: each section is compared to the standard, allowing the detection of events which in a given section occur significantly above or below their average position in the standard. (ii) Marker horizon option: each stratigraphic event is assumed to follow a normal probability curve with equal variance along the relative time axis; chronostratigraphic (marker) horizons such as bentonite beds resulting from volcanic ash falls are assigned zero variance when this option is used. (iii) Unique event option: after ranking and scaling the events for all more abundant fossil species, a rare (unique) event can be entered into the optimum sequence by comparing its position in one or a few sections to those of the more abundant taxa.

19.
In the analysis of rocket and missile structures one frequently encounters cylindrical and conical shells. A simple finite element which fits these configurations is a conical shell finite element. In this paper the stiffness matrix for a conical shell finite element is derived using Novozhilov's strain-displacement relations for a conical shell, and numerical integration is carried out to obtain the stiffness matrix. The element has 28 degrees of freedom and is nonconforming. An eigenvalue analysis of the stiffness matrix showed that it adequately contains all the rigid body modes (six in this case), which is one of the convergence criteria. An advantage of this element is that cylindrical shell, annular-segment flat plate, and rectangular flat plate elements can easily be obtained as degenerate cases. The effectiveness of this element is shown through a variety of numerical examples pertaining to annular plate, cylindrical shell and conical shell problems. Comparison of the present solution is made with existing ones wherever possible; the comparison shows that the present element is superior in some respects to the existing elements.

20.
Hao Yanling, Wang Zhong. Acta Automatica Sinica, 2008, 34(12): 1475-1482
A kernel-based scene matching algorithm for downward-looking, equal-resolution imagery is proposed. By simulating a charge-attraction model, an SNN kernel function is introduced for computing the similarity between high-dimensional data of unequal dimensionality. Feature points in an image are mapped into a radial basis vector (RBV) space, and the SNN kernel is used to compute the similarity between two feature point sets as well as the transition matrix. A permutation test module is used to strengthen the stability of the SNN kernel and ensure the reliability of the output solution. Experiments demonstrate that the SNN-kernel scene matching algorithm is highly robust to image distortion, noise, and missing signal, while maintaining high accuracy and real-time performance.
