Similar Literature
20 similar documents found (search time: 31 ms)
1.
Uniaxial compressive strength (UCS) of rock is crucial for any project constructed in or on a rock mass. The test conducted to measure the UCS of rock is expensive and time consuming, and imposes restrictions on sample quality. For this reason, the UCS of rock may be estimated from simple rock tests such as the point load index (Is(50)), Schmidt hammer (Rn) and p-wave velocity (Vp) tests. To estimate the UCS of granitic rock as a function of relevant rock properties such as Rn, Vp and Is(50), rock cores were collected from the face of the Pahang–Selangor fresh water tunnel in Malaysia. Afterwards, 124 samples were prepared and tested in accordance with the relevant standards to obtain the dataset. The established dataset was then used to estimate the UCS of rock via three non-linear prediction tools, namely non-linear multiple regression (NLMR), artificial neural network (ANN) and adaptive neuro-fuzzy inference system (ANFIS). After running these models, they were examined using several performance indices, including the coefficient of determination (R2), variance account for (VAF) and root mean squared error (RMSE), together with a simple ranking procedure, and the best prediction model was selected. The R2 of 0.951 on the testing dataset indicates the superiority of the ANFIS model; the corresponding values are 0.651 and 0.886 for the NLMR and ANN techniques, respectively. The results point out that the ANFIS model can predict the UCS of rocks with higher capacity than the others. Although the developed model may be useful at a preliminary stage of design, it should be used with caution and only for the specified rock types.
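As a sketch of what a non-linear multiple regression of this kind looks like, the following fits a power-law model UCS = a·Is^b·Rn^c·Vp^d on synthetic data. The functional form, parameter values and data are assumptions for illustration only, not the paper's fitted model or dataset:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-ins for the predictors: point load index Is(50),
# Schmidt rebound number Rn, and p-wave velocity Vp (invented ranges).
n = 124
Is50 = rng.uniform(1.0, 8.0, n)
Rn   = rng.uniform(20.0, 60.0, n)
Vp   = rng.uniform(2000.0, 6000.0, n)
ucs  = 10.0 * Is50**0.8 * (Rn / 40.0)**0.5 * (Vp / 4000.0)**0.3
ucs *= rng.lognormal(0.0, 0.05, n)          # multiplicative noise

# A power-law NLMR model UCS = a * Is^b * Rn^c * Vp^d becomes linear
# after taking logs, so ordinary least squares recovers (log a, b, c, d).
X = np.column_stack([np.ones(n), np.log(Is50), np.log(Rn), np.log(Vp)])
coef, *_ = np.linalg.lstsq(X, np.log(ucs), rcond=None)

pred = np.exp(X @ coef)
ss_res = np.sum((ucs - pred) ** 2)
ss_tot = np.sum((ucs - ucs.mean()) ** 2)
r2 = 1.0 - ss_res / ss_tot
print(f"R^2 = {r2:.3f}")
```

The same R2 computed here is the performance index used to rank the NLMR, ANN and ANFIS models in the abstract.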

2.
Given a text T[1..u] over an alphabet of size σ, the full-text search problem consists in finding the occ occurrences of a given pattern P[1..m] in T. In indexed text searching we build an index on T to improve the search time, at the price of increased space. The current trend in indexed text searching is that of compressed full-text self-indices, which replace the text with a more space-efficient representation of it while at the same time providing indexed access to it. Thus, we obtain efficient access within compressed space. The Lempel-Ziv index (LZ-index) of Navarro is a compressed full-text self-index able to represent T using 4uH_k(T) + o(u log σ) bits of space, where H_k(T) denotes the k-th order empirical entropy of T, for any k = o(log_σ u). This space is about four times the compressed text size. The index can locate all the occ occurrences of a pattern P in T in O(m³ log σ + (m + occ) log u) worst-case time. Although this index has proven very competitive in practice, the O(m³ log σ) term can be excessive for long patterns. Also, the factor 4 in its space complexity makes it larger than other state-of-the-art alternatives. In this paper we present stronger Lempel-Ziv based indices (LZ-indices), improving the overall performance of the original LZ-index. We achieve indices requiring (2 + ε)uH_k(T) + o(u log σ) bits of space for any constant ε > 0, which makes them the smallest existing LZ-indices. We simultaneously improve the search time to O(m² + (m + occ) log u), which makes our indices very competitive with state-of-the-art alternatives. Our indices support displaying any text substring of length ℓ in optimal O(ℓ / log_σ u) time. In addition, we show how the space can be squeezed to (1 + ε)uH_k(T) + o(u log σ) to obtain a structure with O(m²) average search time for m ≥ 2 log_σ u. Alternatively, the search time of LZ-indices can be improved to O((m + occ) log u) with (3 + ε)uH_k(T) + o(u log σ) bits of space, which is much less than the space needed by other Lempel-Ziv-based indices achieving the same search time. Overall, our indices stand out as a very attractive alternative for space-efficient indexed text searching.
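The LZ-index family above is built on Lempel-Ziv parsing of the text. A minimal sketch of an LZ78-style parser and its inverse (illustrative only — the actual index adds trie and range-search structures on top of the parse):

```python
def lz78_parse(text):
    """LZ78 parse: each phrase is the longest previously seen phrase
    extended by one character, encoded as (phrase_id, next_char)."""
    dictionary = {"": 0}
    phrases = []
    current = ""
    for ch in text:
        if current + ch in dictionary:
            current += ch
        else:
            phrases.append((dictionary[current], ch))
            dictionary[current + ch] = len(dictionary)
            current = ""
    if current:  # trailing phrase already present in the dictionary
        phrases.append((dictionary[current], ""))
    return phrases

def lz78_decode(phrases):
    """Invert the parse by rebuilding the phrase table."""
    table = [""]
    out = []
    for ref, ch in phrases:
        table.append(table[ref] + ch)
        out.append(table[-1])
    return "".join(out)

print(lz78_parse("ababab"))
```

The number of phrases in this parse is what ties the index size to the empirical entropy H_k(T).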

3.
Sufficient conditions for the existence and uniqueness of α-bounded and bounded solutions to the difference equation with advanced arguments x(n + 1) = A(n)x(n) + B(n)x(σ1(n)) + f(n, x(n), x(σ2(n))), σi(n) ⩾ n + 1, i = 1, 2, are given. It is proven that under certain conditions it is possible to find positive numbers R, μ such that from every initial condition ξ satisfying ∥ξ∥ ⩽ R a unique bounded solution starts, belonging to the ball ∥x∥ ⩽ μ.

4.
Sensitivity analysis studies how the variation in model outputs can be attributed to different sources of variation. This issue is addressed here through an application of sensitivity analysis techniques to a crop model in the Mediterranean region; in particular, an application of the Morris and Sobol' sensitivity analysis methods to the rice model WARM is presented. The output considered is aboveground biomass at maturity, simulated at five rice districts in different countries (France, Greece, Italy, Portugal, and Spain) for years characterized by low, intermediate, and high continentality. The total effect index of Sobol' (which accounts for the total contribution to the output variation due to a given parameter) and two Morris indices (mean μ and standard deviation σ of the ratios of output changes to parameter variations) were used as sensitivity metrics. Radiation use efficiency (RUE), optimum temperature (Topt), and leaf area index at emergence (LAIini) ranked as the first, second and third most relevant parameters in most of the site × year combinations. Exceptions were observed depending on the sensitivity method (e.g. LAIini was found not relevant by the Morris method) or the site-continentality pattern (e.g. with intermediate continentality in Spain, LAIini and Topt were second and third ranked; with low continentality in Portugal, RUE was outranked by Topt). Low σ values associated with the most relevant parameters indicated limited parameter interactions. The importance of sensitivity analyses that explore site × climate combinations is discussed as a prerequisite for evaluating either novel crop-modelling approaches or the application of known modelling solutions to conditions not explored previously. The need for tools for sensitivity analysis within the modelling environment is also emphasized.
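The Morris indices described above (mean and standard deviation of elementary effects) can be sketched as follows. The toy model and parameter ranges are invented for illustration and are not the WARM rice model:

```python
import numpy as np

def morris_effects(f, n_params, n_traj=50, delta=0.5, seed=0):
    """Crude Morris screening on [0, 1]^k: along each trajectory,
    perturb one parameter at a time by `delta` and record the
    elementary effect (output change / parameter change)."""
    rng = np.random.default_rng(seed)
    effects = [[] for _ in range(n_params)]
    for _ in range(n_traj):
        x = rng.uniform(0.0, 0.5, n_params)   # leave room to add delta
        y = f(x)
        for i in rng.permutation(n_params):
            x_new = x.copy()
            x_new[i] += delta
            y_new = f(x_new)
            effects[i].append((y_new - y) / delta)
            x, y = x_new, y_new
    mu = np.array([np.mean(np.abs(e)) for e in effects])   # mu* variant
    sigma = np.array([np.std(e) for e in effects])
    return mu, sigma

# Toy model standing in for the crop model: output dominated by x0
# (think RUE), with an x0-x1 interaction that inflates their sigma,
# and a nearly irrelevant x2.
f = lambda x: 10 * x[0] + 2 * x[1] + 0.1 * x[2] + 5 * x[0] * x[1]
mu, sigma = morris_effects(f, 3)
print(mu, sigma)
```

A large μ with small σ flags a strong, nearly additive parameter; a large σ flags interactions or non-linearity, matching the interpretation used in the abstract.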

5.
High voltage insulators form an essential part of high voltage electric power transmission systems. Any failure in the satisfactory performance of high voltage insulators results in considerable loss of capital, as numerous industries depend upon the availability of an uninterrupted power supply. The importance of research on insulator pollution has increased considerably with the rise of transmission line voltages. In order to determine the flashover behavior of polluted high voltage insulators and to identify the physical mechanisms that govern this phenomenon, researchers have been led to establish models. Artificial neural networks (ANN) have been used by various researchers for modeling and prediction in the field of energy engineering systems. In this study, an ANN-based model of the form VC = f(H, D, L, σ, n, d), which computes the flashover voltage of insulators, was developed. The model considers the height (H), diameter (D), total leakage length (L), surface conductivity (σ) and number of sheds (d) of an insulator, and the number of units (n) in the insulator chain.

6.
We describe an algorithm to evaluate a wide class of functions and their derivatives, to extreme precision (25–30 significant figures) if required, which does not use any function calls other than square root. The functions are the Coulomb functions of positive argument (Fλ(x, η), Gλ(x, η), x > 0, η, λ real) and hence, as special cases with η = 0, the cylindrical Bessel functions (Jμ(x), Yμ(x), x > 0, μ real), the spherical Bessel functions (jλ(x), yλ(x), x > 0, λ real), Airy functions of negative argument Ai(−x), Bi(−x) and others. The present method has a number of attractive features: both the regular and irregular solutions are calculated; all orders of the functions can be produced from a specified minimum (not necessarily zero) to a specified maximum; functions of a single order can be found without computing all of the orders from zero; the derivatives of the functions arise naturally in the solution and are readily available; the results are available to different precisions from the same subroutine (in contrast to rational approximation techniques); and the method can be used for estimating final accuracies. In addition, the sole constant required in the algorithm is π, no precalculated arrays of coefficients are needed, and the final accuracy is not dependent on that of other subroutines. The method works most efficiently in the region x ≈ 0.5 to x ≈ 1000, but outside this region the results are still reliable, even though the number of iterations within the subroutine rises. Even in these more asymptotic regions the unchanged algorithm can be used, with known accuracy, to test other subroutines more appropriate to those regions. The algorithm uses the recursion relations satisfied by the Coulomb functions and contains a significant advance over Miller's method for evaluating the ratio of successive minimal solutions (Fλ+1/Fλ). It relies on the evaluation of two continued fractions, and no infinite series is required for normalisation: instead the Wronskian is used.
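The core idea behind Miller-type schemes mentioned above — recover the minimal solution by downward recursion and normalise afterwards — can be sketched for the η = 0 special case of spherical Bessel functions, where j0(x) = sin(x)/x supplies the normalisation (this is a simplified illustration, not the paper's continued-fraction algorithm):

```python
import math

def spherical_jn(n, x, n_extra=30):
    """Spherical Bessel j_0..j_n by downward recursion: seed arbitrary
    values at a high order, recur down via
    j_{k-1}(x) = (2k+1)/x * j_k(x) - j_{k+1}(x),
    then rescale using the known j_0(x) = sin(x)/x.  Assumes x > 0."""
    top = n + n_extra
    jp, j = 0.0, 1e-30                 # arbitrary seed at order `top`
    vals = [0.0] * (n + 1)
    for k in range(top, 0, -1):
        jm = (2 * k + 1) / x * j - jp  # unnormalised j_{k-1}
        jp, j = j, jm
        if k - 1 <= n:
            vals[k - 1] = jm
        if abs(j) > 1e250:             # guard against overflow
            jp /= 1e250
            j /= 1e250
            vals = [v / 1e250 for v in vals]
    scale = (math.sin(x) / x) / j      # j now holds unnormalised j_0
    return [v * scale for v in vals]

js = spherical_jn(2, 1.0)
print(js)  # j0(1), j1(1), j2(1)
```

Because j_k is the minimal solution of the recurrence, errors in the arbitrary seed decay rapidly on the way down, which is why a single normalisation at the end suffices.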

7.
Finding the longest common subsequence (LCS) of two given sequences A = a0a1…am−1 and B = b0b1…bn−1 is an important and well studied problem. We consider its generalization, transposition-invariant LCS (LCTS), which has recently arisen in the field of music information retrieval. In LCTS, we look for the LCS between the sequences A + t = (a0 + t)(a1 + t)…(am−1 + t) and B, where t is any integer. We introduce a family of algorithms (motivated by the Hunt-Szymanski scheme for LCS), improving the currently best known complexity from O(mn log log σ) to O(D log log σ + mn), where σ is the alphabet size and D ⩽ mn is the total number of dominant matches over all transpositions. We then demonstrate experimentally that some of our algorithms outperform the best ones from the literature.
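For illustration, LCTS can be computed by brute force over all candidate transpositions on top of the classic O(mn) dynamic program — far slower than the algorithms proposed above, but it makes the definition concrete. The note values below are made up:

```python
def lcs_length(a, b):
    """Classic O(mn) dynamic-programming LCS length."""
    m, n = len(a), len(b)
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m):
        for j in range(n):
            dp[i + 1][j + 1] = (dp[i][j] + 1 if a[i] == b[j]
                                else max(dp[i][j + 1], dp[i + 1][j]))
    return dp[m][n]

def lcts(a, b):
    """Transposition-invariant LCS: maximise LCS(A + t, B) over every
    transposition t.  Only t = b_j - a_i values can create matches."""
    transpositions = {bj - ai for ai in a for bj in b}
    return max(lcs_length([x + t for x in a], b) for t in transpositions)

# Two "melodies": the second is the first shifted up by 5 semitones
# with one wrong note, so LCTS recovers almost the whole line while
# plain LCS sees only the accidental overlap.
A = [60, 62, 64, 65, 67]
B = [65, 67, 69, 99, 72]
print(lcs_length(A, B), lcts(A, B))
```

Restricting t to the differences b_j − a_i is what bounds the work by the number of matches, the same quantity D that drives the complexity of the paper's algorithms.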

8.
For a (molecular) graph, the first Zagreb index M1 is equal to the sum of the squares of the degrees of the vertices, and the second Zagreb index M2 is equal to the sum of the products of the degrees of pairs of adjacent vertices. If G is a connected graph with vertex set V(G), then the eccentric connectivity index of G, ξC(G), is defined as ∑vi∈V(G) di ei, where di is the degree of a vertex vi and ei is its eccentricity. In this report we compare the eccentric connectivity index (ξC) and the Zagreb indices (M1 and M2) for chemical trees. Moreover, we compare the eccentric connectivity index (ξC) and the first Zagreb index (M1) for molecular graphs.
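The three indices compared above are straightforward to compute from the definitions. A small sketch for an arbitrary connected graph, using the path P4 (a chemical tree) as the example:

```python
def graph_indices(n, edges):
    """First Zagreb index M1, second Zagreb index M2, and eccentric
    connectivity index xi_C for a simple connected graph on 0..n-1."""
    adj = {v: set() for v in range(n)}
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    deg = {v: len(adj[v]) for v in range(n)}
    m1 = sum(d * d for d in deg.values())           # sum of deg^2
    m2 = sum(deg[u] * deg[v] for u, v in edges)     # sum over edges

    def ecc(s):
        """Eccentricity of s via breadth-first search."""
        dist = {s: 0}
        frontier = [s]
        while frontier:
            nxt = []
            for u in frontier:
                for w in adj[u]:
                    if w not in dist:
                        dist[w] = dist[u] + 1
                        nxt.append(w)
            frontier = nxt
        return max(dist.values())

    xi_c = sum(deg[v] * ecc(v) for v in range(n))
    return m1, m2, xi_c

# Path P4: degrees 1,2,2,1 and eccentricities 3,2,2,3.
print(graph_indices(4, [(0, 1), (1, 2), (2, 3)]))
```

For P4 this gives M1 = 10, M2 = 8 and ξC = 14, which is the kind of per-graph comparison the report carries out across families of chemical trees.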

9.
The unconfined compressive strength (UCS) of rocks is an important design parameter in rock engineering and geotechnics, which is required for rock mechanical studies in mining and civil projects. This parameter is usually determined through a laboratory UCS test. Since the preparation of high-quality samples for laboratory tests is difficult, expensive and time consuming, the development of predictive models for determining the mechanical properties of rocks is essential in rock engineering. In this study, an attempt was made to develop artificial neural network (ANN) and multivariable regression analysis (MVRA) models to predict the UCS of rock surrounding a roadway. For this, a database of laboratory tests was prepared, which includes rock type, Schmidt hardness, density and porosity as input parameters and UCS as the output parameter. To build the database (comprising 93 datasets), different rock samples, ranging from weak to very strong types, were used. To compare the performance of the developed models, the determination coefficient (R2), variance account for (VAF), mean absolute error (Ea) and mean relative error (Er) between predicted and measured values were calculated. Based on this comparison, it was concluded that the performance of the ANN model is considerably better than that of the MVRA model. Further, a sensitivity analysis shows that rock density and Schmidt hardness were the most effective parameters, whereas porosity was the least effective input parameter on the ANN model output (UCS) in this study.

10.
Systematic first-principles calculations of energy vs. volume (E-V) and single crystal elastic stiffness constants (cij’s) have been performed for 50 Al binary compounds in the Al-X (X = Co, Cu, Hf, Mg, Mn, Ni, Sr, V, Ti, Y, and Zr) systems. The E-V equations of state are fitted by a four-parameter Birch-Murnaghan equation, and the cij’s are determined by an efficient strain-stress method. The calculated lattice parameters, enthalpies of formation, and cij’s of these binary compounds are compared with the available experimental data in the literature. In addition, elastic properties of polycrystalline aggregates including bulk modulus (B), shear modulus (G), Young’s modulus (E), B/G (bulk/shear) ratio, and anisotropy ratio are calculated and compared with the experimental and theoretical results available in the literature. The systematic predictions of elastic properties and enthalpies of formation for Al-X compounds provide insight into the understanding and design of Al-based alloys.
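The step from single-crystal cij's to polycrystalline B, G and E can be illustrated for the cubic case with the standard Voigt averages. A minimal sketch with aluminium-like constants (the numbers are illustrative, not values from the paper, which also computes Reuss/Hill averages for lower symmetries):

```python
def voigt_moduli_cubic(c11, c12, c44):
    """Voigt-average bulk and shear moduli for a cubic crystal from
    its three independent elastic constants (GPa in, GPa out)."""
    B = (c11 + 2 * c12) / 3
    G = (c11 - c12 + 3 * c44) / 5
    return B, G

# Roughly aluminium-like cubic constants, GPa (illustrative values).
B, G = voigt_moduli_cubic(107.0, 61.0, 28.0)
E = 9 * B * G / (3 * B + G)        # isotropic Young's modulus from (B, G)
print(B, G, E, B / G)
```

The B/G ratio printed at the end is the ductility indicator (Pugh's criterion) that the abstract lists among the compared quantities.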

11.
12.
Accurate prediction of high performance concrete (HPC) compressive strength is a very important issue. In the last decade, a variety of modeling approaches have been developed and applied to predict HPC compressive strength from a wide range of variables, with varying success. The selection, application and comparison of suitable modeling methods therefore remain a crucial task, subject to ongoing research and debate. This study proposes three different ensemble approaches: (i) single ensembles of decision trees (DT); (ii) a two-level ensemble approach, which employs the same ensemble learning method twice in building ensemble models; and (iii) a hybrid ensemble approach, which integrates an attribute-based ensemble method (random subspaces, RS) and instance-based ensemble methods (bagging, Bag; stochastic gradient boosting, GB). A decision tree is used as the base learner of the ensembles, and its results are benchmarked against the proposed ensemble models. The obtained results show that the proposed ensemble models noticeably improve the prediction accuracy of the single DT model; in terms of the determination coefficient, the best models for HPC compressive strength forecasting among the eleven proposed predictive models are GB–RS DT (R2 = 0.9520), GB–GB DT (R2 = 0.9456) and Bag–Bag DT (R2 = 0.9368), respectively.

13.
In the framework of better territorial risk assessment and decision making, numerical simulation can provide a useful tool for investigating the propagation phase of phenomena involving granular material, such as rock avalanches, when realistic geological contexts are considered. Among continuum mechanics models, the numerical model SHWCIN uses the depth-averaged Saint Venant approach, in which the avalanche thickness (H) is very much smaller than its extent parallel to the bed (L). The material is assumed to be incompressible, and the mass and momentum equations are written in depth-averaged form. The SHWCIN code, based on the hypothesis of isotropy of normal stresses (σxx = σyy = σzz), has been modified (new code: RASH3D) to allow for the assumption of anisotropy of normal stresses (σxx = Kx σzz; σyy = Ky σzz). A comparison between the results obtained by assuming isotropy or anisotropy is given through the back analysis of a set of laboratory experiments [Gray, J.M.N.T., Wieland, M., Hutter, K., 1999. Gravity-driven free surface flow of granular avalanches over complex basal topography. Proceedings of the Royal Society of London, Series A 455(1841)] and of a case history of a rock avalanche (Frank slide, Canada). The simulations carried out have also underlined the importance of using different earth pressure coefficient values (K) for directions of convergence and of divergence of the flux.

14.
I. Higueras, Computing, 1995, 54(2):185–190
In this paper we show a result that ensures a certain order for the local error of Runge-Kutta methods applied to index 2 differential algebraic problems, with the help of the simplifying conditions B(p), C(q), D(r) and A1(s) for the differential component and B(p), C(q) and A2(s) for the algebraic component.
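The simplifying conditions mentioned above can be checked mechanically for a concrete Butcher tableau. As an illustrative sketch (verifying B(p) and C(q) for the classical RK4 method in exact rational arithmetic; the paper's A1(s)/A2(s) conditions for DAEs are not reproduced here):

```python
from fractions import Fraction as F

# Butcher tableau of the classical 4-stage RK4 method.
A = [[F(0), F(0), F(0), F(0)],
     [F(1, 2), F(0), F(0), F(0)],
     [F(0), F(1, 2), F(0), F(0)],
     [F(0), F(0), F(1), F(0)]]
b = [F(1, 6), F(1, 3), F(1, 3), F(1, 6)]
c = [F(0), F(1, 2), F(1, 2), F(1)]

def holds_B(p):
    """B(p): sum_i b_i c_i^{k-1} = 1/k for k = 1..p."""
    return all(sum(bi * ci**(k - 1) for bi, ci in zip(b, c)) == F(1, k)
               for k in range(1, p + 1))

def holds_C(q):
    """C(q): sum_j a_ij c_j^{k-1} = c_i^k / k for all i, k = 1..q."""
    return all(sum(A[i][j] * c[j]**(k - 1) for j in range(4)) == c[i]**k / k
               for i in range(4) for k in range(1, q + 1))

print(holds_B(4), holds_C(1), holds_C(2))
```

RK4 satisfies B(4) and C(1) but not C(2), which is exactly the kind of gap that causes the order reduction on index 2 problems that results like the one above quantify.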

15.
This paper first defines the profitability to be the probability of achieving a target profit under the optimal ordering policy, and introduces a new index (the achievable capacity index, IA) which can concisely analyze the profitability of a newsboy-type product with normally distributed demand. Note that since the level of profitability depends on the demand mean μ and the demand standard deviation σ once the related costs, selling price, and target profit are given, the index IA is a function of μ and σ. We then assess level performance, which examines whether the profitability meets the designated requirement. The results can determine whether the product is still desirable to order/manufacture. However, μ and σ are always unknown, and the demand quantity is commonly imprecise, especially for a new product. To tackle these problems, a constructive approach combining the vector of fuzzy numbers is introduced to establish the membership function of the fuzzy estimator of IA. Furthermore, a three-decision testing rule and a step-by-step procedure are developed to assess level performance based on fuzzy critical values and fuzzy p-values.

16.

Over the last decade, the application of soft computing techniques has grown rapidly in different scientific fields, especially in rock mechanics. One of these cases relates to the indirect assessment of the uniaxial compressive strength (UCS) of rock samples with different artificial-intelligence-based methods. In fact, the main advantage of such systems is to readily remove some difficulties arising in the direct assessment of UCS, such as the time-consuming and costly UCS test procedure. This study makes an effort to propose four accurate and practical predictive models of UCS using artificial neural network (ANN), hybrid ANN with imperialism competitive algorithm (ICA–ANN), hybrid ANN with artificial bee colony (ABC–ANN) and genetic programming (GP) approaches. To reach the aim of the current study, an experimental database containing a total of 71 data sets was set up by performing a number of laboratory tests on rock samples collected from a tunnel site in Malaysia. To construct the desired predictive models of UCS based on training and test patterns, a combination of several rock characteristics with the most influence on UCS has been used as input parameters, i.e. porosity (n), Schmidt hammer rebound number (R), p-wave velocity (Vp) and point load strength index (Is(50)). To evaluate and compare the prediction precision of the developed models, a series of statistical indices, such as root mean squared error (RMSE), determination coefficient (R2) and variance account for (VAF), are utilized. Based on the simulation results and the measured indices, it was observed that the proposed GP model, with training and test RMSE values of 0.0726 and 0.0691, respectively, gives better performance than the other proposed models, whose corresponding values are (0.0740 and 0.0885), (0.0785 and 0.0742), and (0.0746 and 0.0771) for ANN, ICA–ANN and ABC–ANN, respectively. Moreover, a parametric analysis is carried out on the proposed GP model to further verify its generalization capability. Hence, this GP-based model can be considered a new applicable equation to accurately estimate the uniaxial compressive strength of granite block samples.

17.
This paper models the acidolysis of triolein and palmitic acid under the catalysis of an immobilized sn-1,3 specific lipase. A gene-expression programming (GEP) model, an extension of genetic programming (GP), was developed for the prediction of the concentrations of the major reaction products of this reaction (1-palmitoyl-2,3-oleoyl-glycerol (POO), 1,3-dipalmitoyl-2-oleoyl-glycerol (POP) and triolein (OOO)). Substrate ratio (SR), reaction temperature (T) and reaction time (t) were used as input parameters. The models were able to predict the progress of the reactions with a mean standard error (MSE) of less than 1.0 and an R of 0.978. An explicit formulation of the proposed GEP models is also presented. Considerably good performance was achieved in modelling the acidolysis reaction using GEP. The predictions of the proposed GEP models were compared to those of neural network (NN) modelling, and good agreement was observed between the two. Statistics and scatter plots indicate that the new GEP formulations can be an alternative to experimental models.

18.
Time series of satellite sensor-derived data can be used in the light use efficiency (LUE) model for gross primary productivity (GPP). The LUE model and a closely related linear regression model were studied at an ombrotrophic peatland in southern Sweden. Eddy covariance and chamber GPP, incoming and reflected photosynthetic photon flux density (PPFD), field-measured spectral reflectance, and data from the Moderate Resolution Imaging Spectroradiometer (MODIS) were used in this study. The chamber and spectral reflectance measurements were made on four experimental treatments: unfertilized control (Ctrl), nitrogen fertilized (N), phosphorus fertilized (P), and nitrogen plus phosphorus fertilized (NP). For Ctrl, a strong linear relationship was found between GPP and the photosynthetically active radiation absorbed by vegetation (APAR) (R2 = 0.90). The slope coefficient (εs, where s stands for “slope”) for the linear relationship between seasonal time series of GPP and the product of the normalized difference vegetation index (NDVI) and PPFD was used as a proxy for the light use efficiency factor (ε). There were differences in εs depending on the treatments, with a significant effect for N compared to Ctrl (ANOVA: p = 0.042, Tukey's: p ≤ 0.05). Also, εs was linearly related to the cover degree of vascular plants (R2 = 0.66). As a sensitivity test, the regression coefficients (εs and intercept) for each treatment were used to model time series of 16-day GPP from the product of MODIS NDVI and PPFD. Seasonal averages of GPP were calculated for 2005, 2006, and 2007, which resulted in up to 19% higher average GPP for the fertilization treatments compared to Ctrl. The main conclusion is that the LUE model and the regression model can be applied in peatlands, but temporal and spatial changes in ε or in the regression coefficients should be taken into account.
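The slope proxy εs described above is simply the ordinary least-squares slope of GPP against the product NDVI × PPFD. A minimal sketch on synthetic data (the numbers and units are illustrative stand-ins, not the peatland measurements):

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic season: composites of NDVI and incoming PPFD, with GPP
# generated as eps * NDVI * PPFD plus noise (invented values).
ndvi = rng.uniform(0.4, 0.8, 40)
ppfd = rng.uniform(10.0, 45.0, 40)          # e.g. mol m-2 d-1
eps_true = 0.02                             # assumed "true" efficiency
gpp = eps_true * ndvi * ppfd + rng.normal(0.0, 0.02, 40)

# eps_s is the slope of GPP against NDVI*PPFD, fitted with an
# intercept as in the paper's linear regression model.
x = ndvi * ppfd
X = np.column_stack([x, np.ones_like(x)])
(slope, intercept), *_ = np.linalg.lstsq(X, gpp, rcond=None)
print(slope, intercept)
```

Fitting this slope separately per treatment is what allows the treatment-wise comparison of εs reported in the abstract.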

19.
This study aims to predict the next-day hourly average tropospheric ozone (O3) concentrations using genetic programming (GP). Due to the complexity of this problem, GP is an adequate methodology, as it can optimize, simultaneously, the structure of the model and its parameters. It is an artificial intelligence methodology that uses the same principles as the Darwinian theory of evolution. GP enables the automatic generation of mathematical expressions that are modified following an iterative process applying genetic operations. The inputs of the models were the hourly average concentrations of carbon monoxide (CO), nitrogen oxide (NO), nitrogen dioxide (NO2) and O3, and some meteorological variables (temperature, T; solar radiation, SR; relative humidity, RH; and wind speed, WS) measured 24 h before. GP was also applied to the principal components (PC) obtained from these variables. The analysed period, from May to July 2004, was divided into training and test periods. GP was able to select the most relevant variables for the prediction of O3 concentrations. The original variables T, RH and O3 measured 24 h before were considered significant inputs for prediction. The selected PC also had important contributions from the same variables and from NO2. GP models using the original variables presented better performance in the training period and worse performance in the test period when compared with the models obtained using PC. The results achieved using the GP methodology demonstrate that it can be very useful for solving complex environmental problems.
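The principal components used as GP inputs can be obtained from the centred predictor matrix via a singular value decomposition. A minimal sketch on synthetic correlated predictors (stand-ins for the pollutant and meteorological series, not the monitoring data):

```python
import numpy as np

def principal_components(X, k):
    """Project the rows of X onto its first k principal components
    (PCA via SVD of the mean-centred data matrix); also return the
    fraction of variance each retained component explains."""
    Xc = X - X.mean(axis=0)
    U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
    explained = s**2 / np.sum(s**2)
    return Xc @ Vt[:k].T, explained[:k]

rng = np.random.default_rng(2)
# Five strongly correlated columns (one common driver plus noise),
# mimicking how CO, NO, NO2, O3 and temperature co-vary.
z = rng.normal(size=(200, 1))
X = np.hstack([z + 0.1 * rng.normal(size=(200, 1)) for _ in range(5)])
scores, var_ratio = principal_components(X, 2)
print(var_ratio)
```

With highly correlated inputs the first component captures most of the variance, which is why feeding PCs instead of raw variables can improve test-period generalization as observed in the study.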

20.
Let A = (aij) be an n × n complex matrix. Suppose that G(A), the undirected graph of A, has no isolated vertex. Let E be the set of edges of G(A). We prove that the smallest singular value of A, σn, satisfies σn ≥ min{gij | (i, j) ∈ E}, where gij ≡ (ai + aj − [(ai − aj)2 + (ri + ci)(rj + cj)]1/2)/2 with ai ≡ |aii|, and ri, ci are the ith deleted absolute row sum and column sum of A, respectively. The result simplifies and improves that of Johnson and Szulc: σn ≥ mini≠j gij. (See [1].)
