Similar Documents
20 similar documents found.
1.
PETROS is a fixed-format magnetic tape data bank of major-element chemical analyses of igneous rocks divided into groups representing selected geographic areas and petrologic provinces. The 20,000 analyses and additional calculated average igneous rock compositions may be used for a variety of computer-based research and teaching applications. Interactive programs greatly expand the accessibility and usefulness of PETROS.

2.
The Irvine and Baragar classification of volcanic rocks is based on chemical composition and, for the most part, uses existing, accepted terminology. Volcanic rocks are classified using silica and alumina contents, AFM values, normative color index, An content of plagioclase, the normative Or:Ab:An ratio, and normative olivine. The computer program implementing the Irvine and Baragar technique is written in FORTRAN IV and is divided into three sections: (1) preparation of the chemical analysis, in which the analysis is adjusted for excess ferrous iron and corrected for volatiles; (2) calculation of a molecular norm (Niggli-Barth norm); and (3) classification of the volcanic rock according to the Irvine and Baragar scheme. The program output indicates the major chemical group (subalkaline or alkaline), the chemical series (tholeiitic, calc-alkaline, and sodic or potassic alkaline rocks), and the rock name, as well as the plot used to determine each label. The program can name a volcanic rock realistically and reliably.
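As a rough illustration of the program's flow (not the published code), the Python sketch below mirrors the preparation and classification sections for the first classification split only; the linear alkaline/subalkaline boundary and the sample analysis are illustrative placeholders, and the molecular norm calculation is omitted.

```python
# Sketch of the program's staged flow, assuming a simplified
# alkaline/subalkaline boundary; the published Irvine & Baragar (1971)
# curves and the full Niggli-Barth molecular norm are not reproduced here.

def adjust_analysis(oxides):
    """Stage 1: renormalize the analysis to 100% on a volatile-free basis."""
    volatile = ("H2O", "CO2")
    anhydrous = {k: v for k, v in oxides.items() if k not in volatile}
    total = sum(anhydrous.values())
    return {k: 100.0 * v / total for k, v in anhydrous.items()}

def is_alkaline(oxides, boundary=lambda sio2: 0.37 * sio2 - 14.4):
    """Stage 3 (first split): alkaline if total alkalis plot above the
    alkali-silica boundary. The linear boundary here is an illustrative
    placeholder for the published curve."""
    alkalis = oxides.get("Na2O", 0.0) + oxides.get("K2O", 0.0)
    return alkalis > boundary(oxides["SiO2"])

sample = {"SiO2": 48.5, "Al2O3": 15.2, "FeO": 9.8, "MgO": 7.9,
          "CaO": 10.5, "Na2O": 2.8, "K2O": 0.9, "H2O": 1.1}
adj = adjust_analysis(sample)
print("alkaline" if is_alkaline(adj) else "subalkaline")
```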

3.
Soil temperature influences most physical, chemical, and biological processes that occur in soil. Analytical models (AMs), i.e., exact analytical solutions of the soil heat flux equation, exist only for very specific conditions. Finite-difference models (FDMs) are applicable to more general conditions. Since FDMs are approximate rather than exact, they must first be tested by comparing their simulated results with those from AMs, where available. In previous literature, such tests have been done only briefly, mainly through data-plots. The general objective of this study was to develop comprehensive procedures for such tests: (i) comparison of models by data-plots and statistical indices; (ii) searching for sources of apparent errors (AEs), a concept introduced here; (iii) determination of the number of simulated periods (years) the model program should run before output is recorded; and (iv) study of the influence of time-step and space-step magnitudes on model performance. An explicit one-dimensional FDM was tested. Simulations represented soils with different thermal properties. The simulation results showed that: (i) statistical index values consistently quantified the qualitative data-plot observations; (ii) AEs could be corrected by careful treatment of the first soil-layer temperature and by choosing a sufficiently large simulation depth; (iii) no more than five periods are necessary for the FDM tested here; and (iv) an adequate choice of time and space steps reduces both errors and computing time.
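The Python sketch below illustrates the kind of FDM-versus-AM test described above: an explicit scheme for the one-dimensional heat equation with a sinusoidal surface temperature is spun up for five periods and compared against the classical damped-harmonic analytical solution. The soil properties and grid sizes are illustrative assumptions, not values from the study.

```python
# Explicit 1-D FDM for soil heat flow compared with the analytical
# damped-wave solution; all parameter values are illustrative.
import numpy as np

alpha = 5e-7            # thermal diffusivity (m^2/s), illustrative
Tm, A0 = 20.0, 10.0     # mean and amplitude of surface temperature (deg C)
period = 86400.0        # one day (s)
omega = 2 * np.pi / period
d = np.sqrt(2 * alpha / omega)   # damping depth of the analytical model

dz, depth = 0.01, 2.0            # space step and simulation depth (m)
dt = 0.4 * dz**2 / alpha         # time step satisfying the stability limit
z = np.arange(0, depth + dz, dz)
T = np.full_like(z, Tm)          # initial condition

n_periods = 5                    # spin-up periods before output is recorded
steps = int(n_periods * period / dt)
r = alpha * dt / dz**2
for n in range(steps):
    t = (n + 1) * dt
    T[1:-1] += r * (T[2:] - 2 * T[1:-1] + T[:-2])   # explicit update
    T[0] = Tm + A0 * np.sin(omega * t)              # surface boundary
    T[-1] = Tm                                      # deep boundary at Tm

T_exact = Tm + A0 * np.exp(-z / d) * np.sin(omega * t - z / d)
print("RMSE vs analytical solution:", np.sqrt(np.mean((T - T_exact)**2)))
```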

4.
5.
Digital geological maps of New Zealand (QMAP) are combined with 9256 samples with rock density measurements from the national rock catalogue PETLAB and supplementary geological sources to generate a first digital density model of New Zealand. This digital density model will be used to compile a new geoid model for New Zealand. The geological map GIS dataset contains 123 unique main rock types spread over more than 1800 mapping units. Through these main rock types, rock densities from measurements in the PETLAB database and other sources have been assigned to the geological mapping units. Analysis of the derived digital density model yields a mean surface rock density of 2440 kg/m³ for New Zealand. The North Island mean of 2336 kg/m³ is lower, reflecting the predominance of relatively young, weakly consolidated sedimentary rock, tephra, and ignimbrite, compared with the South Island mean of 2514 kg/m³, where igneous intrusions and metamorphosed sedimentary rocks, including schist and gneiss, are more common. All of these values are significantly lower than the mean density of the upper continental crust commonly adopted in geological, geophysical, and geodetic applications (2670 kg/m³), which is typically attributed to crystalline and granitic rock formations. The lower density has implications for the calculation of the geoid surface and gravimetric reductions throughout New Zealand.
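The density assignment reduces to an area-weighted mean over mapping units. The sketch below illustrates that calculation; the rock types, areas, and densities are invented placeholders, not QMAP or PETLAB values.

```python
# Area-weighted mean surface density over mapping units; all numbers
# below are illustrative, not values from QMAP or PETLAB.
mapping_units = [
    # (main rock type, area in km^2)
    ("greywacke", 52000.0),
    ("schist", 18000.0),
    ("ignimbrite", 9000.0),
    ("tephra", 4000.0),
]
density_by_rock_type = {   # kg/m^3, e.g. medians of measured samples
    "greywacke": 2530.0,
    "schist": 2650.0,
    "ignimbrite": 2100.0,
    "tephra": 1400.0,
}

total_area = sum(area for _, area in mapping_units)
mean_density = sum(density_by_rock_type[rt] * area
                   for rt, area in mapping_units) / total_area
print(f"area-weighted mean surface density: {mean_density:.0f} kg/m^3")
```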

6.
In this paper we present a new local remeshing algorithm dedicated to the problem of erosion in finite element models whose grid follows the movement of the free surface. The method, which we name Surface Lagrangian Remeshing (SLR), is adapted to 2D Lagrangian models that couple surface erosion with deformation of Earth materials. The remeshing procedure preserves the nodes defining the surface subjected to erosion and removes nodes belonging to surface elements whose internal angles or area become critically small. This algorithm is ideally suited to tracking long-term surface evolution. To validate the method we perform a set of numerical tests, using triangular finite elements, that compare the results of the SLR algorithm with global remeshing and with analytical results. The results show good agreement with the analytical solutions. Interpolation errors associated with remeshing are generated locally, and numerical diffusion is restricted to the remeshed domain itself. In addition, the method is computationally inexpensive compared with classical global remeshing algorithms. We propose to couple the SLR method with the Dynamical Lagrangian Remeshing (DLR) algorithm to enable purely local remeshing of Lagrangian models that couple large deformation of Earth materials with large erosion.
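A minimal sketch of the element-quality test that would drive such node removal is given below, assuming simple thresholds on a triangle's smallest internal angle and area; the thresholds are illustrative, not those of the SLR implementation.

```python
# Flag triangles whose smallest internal angle or area falls below a
# threshold, as candidates for local remeshing; thresholds are illustrative.
import numpy as np

def triangle_quality(p0, p1, p2):
    """Return (smallest internal angle in radians, area) of a triangle."""
    pts = np.array([p0, p1, p2], dtype=float)
    angles = []
    for i in range(3):
        a = pts[(i + 1) % 3] - pts[i]
        b = pts[(i + 2) % 3] - pts[i]
        cosang = a @ b / (np.linalg.norm(a) * np.linalg.norm(b))
        angles.append(np.arccos(np.clip(cosang, -1.0, 1.0)))
    v1, v2 = pts[1] - pts[0], pts[2] - pts[0]
    area = 0.5 * abs(v1[0] * v2[1] - v1[1] * v2[0])
    return min(angles), area

def needs_remeshing(tri, min_angle=np.deg2rad(5.0), min_area=1e-6):
    angle, area = triangle_quality(*tri)
    return angle < min_angle or area < min_area

# A sliver element produced by surface erosion would be flagged:
print(needs_remeshing([(0.0, 0.0), (1.0, 0.0), (0.5, 0.01)]))  # True
```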

7.
To reduce the negative effects of word segmentation, an automatic recognition method for foreign transliterated names based on character co-occurrence frequency statistics is proposed. The character usage of transliterated names is analyzed statistically, the concept of a transliterated-name co-occurrence string is introduced, and a table of characters not used in transliterated names is derived from the transliterated-name character table and the table of common Chinese characters. On this basis, transliterated-name boundaries are defined, and a method for correcting word-segmentation errors is designed on top of the boundary definition. Tests on open corpora show that, compared with the maximum word-frequency segmentation algorithm, the proposed algorithm improves precision, recall, and F-measure for transliterated-name recognition.
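As an illustration of the co-occurrence idea (not the paper's algorithm), the sketch below extracts maximal runs of characters drawn from a transliteration-character table as candidate names and counts their frequencies; the character set is a tiny invented subset.

```python
# Candidate transliterated-name extraction, assuming a transliteration
# character table is available; the set below is an illustrative subset.
from collections import Counter

TRANSLIT_CHARS = set("斯基尔德拉姆科夫洛维奇托克")  # illustrative subset

def candidate_names(text, min_len=2):
    """Yield maximal runs of transliteration characters as candidates."""
    run = []
    for ch in text + "\0":          # sentinel flushes the final run
        if ch in TRANSLIT_CHARS:
            run.append(ch)
        else:
            if len(run) >= min_len:
                yield "".join(run)
            run = []

counts = Counter(candidate_names("据报道维克托夫与德拉科姆会晤"))
print(counts.most_common())
```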

8.
The unconfined compressive strength (UCS) of rocks is an important design parameter in rock engineering and geotechnics, required for rock mechanical studies in mining and civil projects. This parameter is usually determined through a laboratory UCS test. Since the preparation of high-quality samples for laboratory tests is difficult, expensive, and time consuming, the development of predictive models for the mechanical properties of rocks is essential in rock engineering. In this study, artificial neural network (ANN) and multivariable regression analysis (MVRA) models were developed to predict the UCS of rock surrounding a roadway. For this purpose, a database of laboratory tests was prepared, with rock type, Schmidt hardness, density, and porosity as input parameters and UCS as the output parameter. The database comprises 93 datasets from rock samples ranging from weak to very strong types. To compare the performance of the developed models, the determination coefficient (R²), variance account for (VAF), mean absolute error (Ea), and mean relative error (Er) between predicted and measured values were calculated. Based on this comparison, the performance of the ANN model is considerably better than that of the MVRA model. Furthermore, a sensitivity analysis shows that rock density and Schmidt hardness are the most effective parameters, whereas porosity is the least effective input parameter on the ANN model output (UCS).
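A minimal sketch of the ANN side of such a comparison is shown below, using scikit-learn on synthetic stand-in data (the paper's 93 laboratory datasets are not reproduced), together with the R² and VAF indices.

```python
# ANN regression of UCS from (rock type code, Schmidt hardness, density,
# porosity) on synthetic data; the data-generating rule is an assumption.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 93
X = np.column_stack([
    rng.integers(0, 4, n),          # rock type (coded)
    rng.uniform(15, 60, n),         # Schmidt hardness
    rng.uniform(2.0, 2.9, n),       # density (g/cm^3)
    rng.uniform(0.5, 25, n),        # porosity (%)
])
# Synthetic UCS with noise, loosely increasing with hardness and density:
y = 2.5 * X[:, 1] + 30 * X[:, 2] - 1.5 * X[:, 3] + rng.normal(0, 5, n)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
scaler = StandardScaler().fit(X_tr)
model = MLPRegressor(hidden_layer_sizes=(8,), max_iter=5000,
                     random_state=0).fit(scaler.transform(X_tr), y_tr)
pred = model.predict(scaler.transform(X_te))

r2 = 1 - np.sum((y_te - pred)**2) / np.sum((y_te - y_te.mean())**2)
vaf = 100 * (1 - np.var(y_te - pred) / np.var(y_te))   # variance account for
print(f"R^2 = {r2:.3f}, VAF = {vaf:.1f}%")
```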

9.
To address the software errors and error propagation caused in spaceborne software systems by cosmic rays and environmental disturbances, an error propagation analysis method for spaceborne software systems is studied. The method evaluates software reliability at two levels, signals and modules, and uses the results to analyze the vulnerability of system signals and modules, identifying the most vulnerable signals and modules in the system as well as the signal propagation paths most likely to propagate errors. Parameters such as the error propagation rate and exposure rate of signals and modules are defined, and methods for computing these parameters are designed, ...

10.
郭振华  岳红  王宏 《计算机仿真》2005,22(11):91-94
Principal component analysis and principal component neural networks based on minimum mean square error are effective multivariate dimension-reduction statistical techniques; the principal components they extract carry the maximum variance of the system. An approximate model of a non-Gaussian stochastic system should carry the maximum information entropy of the system, but capturing maximum variance does not necessarily capture maximum information entropy. This paper proposes a nonlinear principal component neural network model that uses minimum residual entropy as a general criterion, and gives an approximate entropy computation based on Parzen window density estimation together with a network learning algorithm. It is then shown, from an information-theoretic viewpoint, that for Gaussian stochastic systems the principal component networks trained under the minimum residual entropy and minimum mean square error criteria yield consistent results. Finally, simulations verify the effectiveness of the method, and the results are compared with those of principal component analysis and principal component neural networks based on minimum mean square error.
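The Parzen-window entropy approximation at the heart of the criterion can be sketched as follows; the Gaussian kernel and Silverman bandwidth rule are illustrative choices. The example also shows that two residual samples with equal variance can differ in entropy, which is the abstract's motivating point.

```python
# Plug-in entropy estimate from a Parzen (Gaussian-kernel) density
# estimate of the residuals; bandwidth rule is an illustrative choice.
import numpy as np

def parzen_entropy(residuals, bandwidth=None):
    """Estimate H = -mean(log p_hat(e_i)) with a Gaussian kernel."""
    e = np.asarray(residuals, dtype=float)
    n = e.size
    if bandwidth is None:
        bandwidth = 1.06 * e.std() * n ** (-1 / 5)   # Silverman's rule
    diff = (e[:, None] - e[None, :]) / bandwidth
    kernel = np.exp(-0.5 * diff**2) / np.sqrt(2 * np.pi)
    p_hat = kernel.mean(axis=1) / bandwidth
    return -np.mean(np.log(p_hat))

rng = np.random.default_rng(1)
gauss = rng.normal(0, 1, 500)
heavy = rng.laplace(0, 1 / np.sqrt(2), 500)  # same variance, lower entropy
print(parzen_entropy(gauss), parzen_entropy(heavy))
```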

11.
A modified subgradient algorithm is presented for the generalized assignment problem, which, like the classical assignment problem, concerns the minimum-cost assignment of agents to jobs. The generalized assignment problem, however, permits differences in job-performance efficiency among agents and thereby allows each agent to be assigned more than a single job, as long as every job is ultimately assigned and the total resources available to each agent are not exceeded. A two-stage heuristic algorithm using a modified subgradient approach and branch and bound is developed for solving the problem. By computing step sizes precisely and using the dual as a bound, the algorithm is shown to be particularly effective and easy to program and implement. A numerical example is presented to illustrate the model and method, and computational experience is cited for problems containing up to 12,000 0–1 variables.
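A compact sketch of the Lagrangian-relaxation/subgradient idea for this problem is given below: the job-assignment constraints are dualized, the relaxed problem separates into one small knapsack per agent (solved by brute force here), and the multipliers follow a diminishing subgradient step. The data and step-size rule are illustrative, not the paper's precise step-size computation.

```python
# Subgradient method on the Lagrangian dual of a tiny generalized
# assignment problem; costs, resources, and step sizes are illustrative.
import itertools
import numpy as np

cost = np.array([[13, 16, 10, 15],     # cost[i][j]: agent i doing job j
                 [11, 13, 17, 12],
                 [14, 10, 13, 11]], dtype=float)
res = np.array([[4, 5, 3, 5],          # resource use r[i][j]
                [3, 4, 5, 4],
                [5, 3, 4, 3]], dtype=float)
cap = np.array([8.0, 8.0, 8.0])        # agent capacities
m, n = cost.shape

lam = np.zeros(n)                      # multipliers on sum_i x_ij = 1
for it in range(60):
    # Solve each agent's knapsack: min sum_j (cost - lam)[i,j] * x_ij
    x = np.zeros((m, n))
    dual = lam.sum()
    for i in range(m):
        best_val, best_set = 0.0, ()
        for subset in itertools.chain.from_iterable(
                itertools.combinations(range(n), k) for k in range(n + 1)):
            if sum(res[i, j] for j in subset) <= cap[i]:
                val = sum(cost[i, j] - lam[j] for j in subset)
                if val < best_val:
                    best_val, best_set = val, subset
        x[i, list(best_set)] = 1
        dual += best_val
    g = 1.0 - x.sum(axis=0)            # subgradient of the dual function
    if not g.any():
        break                          # relaxed solution is feasible
    lam += (1.0 / (it + 1)) * g        # diminishing step size
print("dual lower bound:", dual)
```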

12.
Least-squares fitting of petrological and geochemical overdetermined linear systems is carried out by means of a rapidly converging algorithm that allows a general treatment of coefficient uncertainties. Both constrained and unconstrained solutions are presented. As examples, the modal analysis of lunar rock 10022 and the stoichiometric coefficients of two coronitic reactions are determined by applying the proposed procedure to available chemical analyses.
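The sketch below illustrates the kind of weighted, constrained least-squares mixing calculation involved, using NumPy and SciPy; all compositions and uncertainties are invented for illustration, and the paper's particular algorithm is not reproduced.

```python
# Bulk-rock oxides fit as a linear combination of mineral compositions,
# with rows weighted by analytical uncertainty; unconstrained and
# non-negativity-constrained solutions compared. Data are illustrative.
import numpy as np
from scipy.optimize import nnls

# Columns: minerals; rows: oxides (wt%). Illustrative compositions.
A = np.array([[53.0,  0.1, 48.0],
              [19.0,  0.2, 33.0],
              [ 6.0, 45.0,  0.5],
              [16.0,  0.3, 16.0]])
b = np.array([45.0, 22.0, 9.0, 15.0])     # bulk-rock analysis
sigma = np.array([0.5, 0.4, 0.3, 0.4])    # 1-sigma analytical uncertainty

Aw, bw = A / sigma[:, None], b / sigma    # weight rows by 1/sigma
x_free, *_ = np.linalg.lstsq(Aw, bw, rcond=None)   # unconstrained
x_nn, rnorm = nnls(Aw, bw)                         # constrained (x >= 0)
print("unconstrained modes:", x_free)
print("constrained modes:  ", x_nn, " residual:", rnorm)
```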

13.
In this paper, a kernelized version of clustering-based discriminant analysis, which we name KCDA, is proposed. The main idea is to first map the original data into a high-dimensional feature space and then perform clustering-based discriminant analysis there. The kernel fuzzy c-means algorithm is used to cluster each class. Tests on two standard UCI benchmarks show that the proposed method is very promising.

14.
Practical and economic constraints prompt the need to obtain structural geological information with reduced field effort. This paper presents a methodological strategy for deriving such information from remotely sensed (RS) images coupled with empirical tectonic models, as a way of bridging from regional to local scales. The hypothesis tested was that the spatial organisation displayed by small-scale tectonic structures (and not only the large ones) can be correlated with the arrangement of natural linear features observed on RS imagery, allowing inferences on the local geological structure. Azimuth direction and, secondarily, length were found to be the most appropriate attributes for spatial characterisation and comparative analysis of line sets. Inferences about tectonic structures and their directional arrangements were based on a combination of qualitative methods (visual analysis of histograms) and statistical methods (non-parametric goodness-of-fit tests). Numerical evaluation of the test results, expressed in terms of the average degree of matching (91% to 95%) and errors (5% omission and 31.2% commission), showed a reasonable efficiency of the inferential approach in predicting the structural geological setting in lithological units as well as in mid-size areas (approximately 50 to 80 km²). Potential applications of the inferential approach in terrain evaluation schemes, particularly for planning and engineering-related purposes, are envisaged.

15.
Daily numerical data entry is subject to human error, and errors in numerical data can cause serious losses in health care, safety, and finance. The difficulty human operators have in detecting errors in numerical data entry necessitates an early error detection/prediction mechanism to proactively prevent severe accidents. To explore the possibility of using multi-channel electroencephalography (EEG) collected before movements/reactions to detect or predict human errors, a linear discriminant analysis (LDA) classifier was utilised to predict numerical typing errors before their occurrence. Single-trial EEG data were collected from seven participants during numerical hear-and-type tasks, and three temporal features were extracted from six EEG sites in a 150-ms time window. The sensitivity of the LDA classifier was adjusted through the critical ratio of two Mahalanobis distances used as the classification criterion. On average, the LDA classifier was able to detect 74.34% of numerical typing errors in advance with only 34.46% false alarms, resulting in a sensitivity of 1.05. A cost analysis also showed that using the LDA classifier would be beneficial as long as the penalty of an error is at least 15 times the cost of inspection when the error rate is 5%. LDA demonstrated realistic potential for detecting/predicting relatively few errors in numerical data without heavy pre-processing. This is one step towards predicting and preventing human errors in perceptual-motor tasks before their occurrence.
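A toy sketch of the adjustable criterion, on synthetic stand-in features rather than EEG data, is given below: a trial is flagged as a pre-error state when the ratio of its Mahalanobis distances to the two classes exceeds a critical ratio, and sweeping that ratio trades hit rate against false alarms.

```python
# Mahalanobis-ratio classification criterion on synthetic two-class data;
# class means, covariance handling, and ratios are illustrative.
import numpy as np

rng = np.random.default_rng(2)
correct = rng.normal([0, 0, 0], 1.0, size=(300, 3))      # "correct" trials
error = rng.normal([1.2, 0.8, 1.0], 1.0, size=(30, 3))   # rare error trials

X = np.vstack([correct, error])
mu_c, mu_e = correct.mean(axis=0), error.mean(axis=0)
S_inv = np.linalg.inv(np.cov(X.T))      # pooled covariance (simplified)

def mahal(x, mu):
    d = x - mu
    return np.sqrt(d @ S_inv @ d)

def flag_error(x, critical_ratio):
    """Flag when x is sufficiently closer to the error class."""
    return mahal(x, mu_c) / mahal(x, mu_e) > critical_ratio

for ratio in (0.8, 1.0, 1.2):   # stricter ratios trade hits for false alarms
    hits = np.mean([flag_error(x, ratio) for x in error])
    fas = np.mean([flag_error(x, ratio) for x in correct])
    print(f"ratio {ratio}: hit rate {hits:.2f}, false alarm rate {fas:.2f}")
```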

16.
17.
This paper proposes a hybrid algorithm to find optimal locations for dampers within a structural system. In this approach, Harmony Search is augmented by a probability mass function that utilizes two concepts borrowed from the Ant Colony Optimization algorithm: pheromone and heuristic values. The former is a dynamic weight factor assigned to each solution component, and the latter is a constant one defined on the basis of modal analysis of the structural system. The numerical examples provided show that this hybrid search scheme improves the convergence rate, so that the algorithm finds qualified solutions in fewer iterations.
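The selection rule can be sketched as a pheromone-and-heuristic-weighted probability mass function, as below; the values, exponents, and update rule are illustrative assumptions rather than the paper's exact scheme.

```python
# Candidate damper locations drawn with probability proportional to
# tau^alpha * eta^beta; all values and the update rule are illustrative.
import numpy as np

rng = np.random.default_rng(3)
eta = np.array([0.9, 0.4, 0.7, 0.2, 0.6])   # heuristic values (from modal analysis)
tau = np.ones_like(eta)                      # pheromone, initially uniform
alpha, beta = 1.0, 2.0

def sample_location():
    weights = tau**alpha * eta**beta
    return rng.choice(len(eta), p=weights / weights.sum())

def reinforce(location, quality, evaporation=0.1):
    """Evaporate all trails, then deposit on the chosen location."""
    global tau
    tau = (1 - evaporation) * tau
    tau[location] += quality

loc = sample_location()
reinforce(loc, quality=0.5)   # quality would come from a structural run
print("sampled location:", loc, "updated pheromone:", tau)
```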

18.
N. Uddin 《Computers & Structures》1997,64(5-6):1175-1182
Estimation of the permanent deformations of embankment dams is, in practice, based on the simplifying assumption that the dynamic-acceleration response and wedge sliding are two separate processes (decoupled ‘elastic’ and ‘rigid-slip’ features of the dynamic response). An alternative hypothesis is proposed in this paper, namely that these two processes occur simultaneously. To this end, the dam is assumed to contain an a priori assigned, potentially sliding interface, and the dynamic response is computed in a single step. As validation of the new single-step procedure, a numerical analysis is carried out and shown to successfully explain the asymmetric response of La Villita Dam recorded during the Mexico earthquake of 15 September 1985. All analyses are performed with ADINA, using special interface elements to model the slip and Newmark's time-integration algorithm for a direct step-by-step solution.
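For reference, a minimal sketch of Newmark's algorithm (average-acceleration variant) for a single-degree-of-freedom system under a short base-acceleration pulse is given below; the system properties and input are illustrative, and the actual analyses involve a full finite element model with interface elements.

```python
# Newmark direct step-by-step integration (gamma = 1/2, beta = 1/4) for a
# single-degree-of-freedom system; all parameter values are illustrative.
import numpy as np

m, c, k = 1.0, 0.4, 40.0          # mass, damping, stiffness
gamma, beta, dt = 0.5, 0.25, 0.01
t = np.arange(0, 5, dt)
ag = np.where(t < 0.5, 2.0 * np.sin(4 * np.pi * t), 0.0)   # base accel pulse
p = -m * ag                        # effective earthquake load

u = v = 0.0
a = (p[0] - c * v - k * u) / m
k_eff = k + gamma / (beta * dt) * c + m / (beta * dt**2)
u_hist = [u]
for i in range(1, len(t)):
    # Effective load built from the previous step's state (standard form)
    p_eff = (p[i]
             + m * (u / (beta * dt**2) + v / (beta * dt)
                    + (1 / (2 * beta) - 1) * a)
             + c * (gamma / (beta * dt) * u + (gamma / beta - 1) * v
                    + dt * (gamma / (2 * beta) - 1) * a))
    u_new = p_eff / k_eff
    v_new = (gamma / (beta * dt) * (u_new - u)
             + (1 - gamma / beta) * v + dt * (1 - gamma / (2 * beta)) * a)
    a_new = ((u_new - u) / (beta * dt**2) - v / (beta * dt)
             - (1 / (2 * beta) - 1) * a)
    u, v, a = u_new, v_new, a_new
    u_hist.append(u)
print("peak displacement:", float(np.max(np.abs(u_hist))))
```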

19.
Execution-time estimates traditionally supplied by users are usually inaccurate. Combining classification with instance-based learning, and using both template-similarity and numerical-similarity measures, jobs similar to the current one are retrieved from historical scheduling data, and their historical information is used to predict the current job's execution time. Attributes from the scheduling history, including user name, group name, queue name, application name, requested number of processors, requested (estimated) execution time, and requested memory, are used for training and prediction; the parameters involved in the algorithm are determined by a genetic algorithm. Numerical experiments show that, compared with existing work, the method achieves a similarly low underestimation rate while using fewer parameters, and attains a lower mean absolute error. On the HPC2N04 and HPC2N05 log datasets, the mean absolute error is reduced by 43% and 77%, respectively. The effect on job scheduling of replacing user estimates with online predictions is also studied, with a preliminary analysis of the results and directions for future improvement.
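A toy sketch of the similar-job lookup is given below: categorical fields are compared by template equality, numeric fields by a normalized distance, and the runtimes of the most similar historical jobs are averaged with similarity weights. The similarity measure, weights, and data are illustrative; in the paper the parameters are tuned by a genetic algorithm.

```python
# Similarity-weighted runtime prediction from historical jobs; the
# similarity measure, weights, and history below are illustrative.
import numpy as np

history = [  # (user, queue, app, n_procs, requested_time_s, runtime_s)
    ("alice", "batch", "cfd",   64, 7200, 5100),
    ("alice", "batch", "cfd",  128, 7200, 5600),
    ("bob",   "short", "qe",    16, 1800,  900),
    ("alice", "batch", "cfd",   64, 3600, 4800),
]

def similarity(job, past, w_cat=1.0, w_num=1.0):
    cat = sum(a == b for a, b in zip(job[:3], past[:3])) / 3     # template
    num = 1.0 / (1.0 + abs(np.log2(job[3] / past[3]))
                 + abs(np.log10(job[4] / past[4])))              # numeric
    return w_cat * cat + w_num * num

def predict_runtime(job, k=3):
    nearest = sorted(history, key=lambda p: similarity(job, p),
                     reverse=True)[:k]
    weights = np.array([similarity(job, p) for p in nearest])
    runtimes = np.array([p[5] for p in nearest])
    return float(weights @ runtimes / weights.sum())

print(predict_runtime(("alice", "batch", "cfd", 64, 7200)))
```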

20.
Three numerical algorithms for computing the solution of the covariance matrix differential equation for the states of a linear time-invariant dynamical system forced by white Gaussian noise are analyzed. Estimates of the errors due to truncation and roundoff are derived for each algorithm. The error analyses assume that computation is performed in floating-point arithmetic and that the problem is not numerically ill-conditioned. The computational complexity of each algorithm is also discussed. Two numerical examples are presented to evaluate the performance of each algorithm.
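The equation such algorithms solve is the matrix differential equation dP/dt = A P + P A^T + Q. The sketch below integrates it in two ways for comparison: fixed-step RK4 on the matrix ODE, and the exact solution of the vectorized linear system via the matrix exponential. A, Q, and the initial covariance are illustrative, and neither method is claimed to be one of the paper's three algorithms.

```python
# Covariance matrix ODE dP/dt = A P + P A^T + Q solved by RK4 and by the
# exact vectorized matrix-exponential solution; data are illustrative.
import numpy as np
from scipy.linalg import expm

A = np.array([[0.0, 1.0], [-2.0, -0.5]])
Q = np.array([[0.0, 0.0], [0.0, 0.1]])   # white-noise intensity
P0 = np.zeros((2, 2))
T, dt = 2.0, 0.01

def f(P):
    return A @ P + P @ A.T + Q

# Method 1: classical RK4 on the matrix differential equation
P = P0.copy()
for _ in range(int(T / dt)):
    k1 = f(P)
    k2 = f(P + 0.5 * dt * k1)
    k3 = f(P + 0.5 * dt * k2)
    k4 = f(P + dt * k3)
    P += dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

# Method 2: exact solution of the vectorized system
# vec(P)' = (I kron A + A kron I) vec(P) + vec(Q)
I = np.eye(2)
L = np.kron(I, A) + np.kron(A, I)
p_exact = expm(L * T) @ P0.ravel() + np.linalg.solve(
    L, (expm(L * T) - np.eye(4)) @ Q.ravel())
print("max |RK4 - exact|:", np.abs(P.ravel() - p_exact).max())
```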
