Similar Documents
20 similar documents found.
1.
Splitting tensile strength is one of the important mechanical properties of concrete used in structural design. This paper proposes formulations for predicting the cylinder splitting tensile strength of concrete using gene expression programming (GEP). The database used for the training, testing, and validation sets of the GEP models is obtained from the literature. The GEP formulations predict the splitting tensile strength of concrete as a function of the water-binder ratio, the age of the specimen, and the 100-mm cube compressive strength. The training and testing sets of the GEP models are randomly selected from the complete experimental data, and the formulations are additionally validated against experimental data not used in either set. The formulations' predictions are compared with the experimental results, and the comparison shows that the GEP formulations predict the splitting tensile strength of concrete well.
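As a rough illustration of this modelling workflow, the sketch below fits a symbolic-regression model with the gplearn library, a genetic-programming tool used here only as a stand-in for GEP (it is not the paper's software). The three inputs match those named above, but the value ranges, the synthetic data, and the placeholder target formula are assumptions, not the paper's data or result.

```python
# Minimal symbolic-regression sketch (gplearn as a stand-in for GEP).
import numpy as np
from gplearn.genetic import SymbolicRegressor

rng = np.random.default_rng(0)
X = np.column_stack([
    rng.uniform(0.3, 0.6, 200),   # water-binder ratio (assumed range)
    rng.uniform(3, 365, 200),     # age of specimen, days (assumed range)
    rng.uniform(20, 80, 200),     # 100-mm cube compressive strength, MPa (assumed)
])
# Placeholder target: the true splitting-strength relation is what GEP would discover.
y = 0.3 * X[:, 2] ** 0.7 + rng.normal(0, 0.2, 200)

model = SymbolicRegressor(population_size=500, generations=20,
                          function_set=('add', 'sub', 'mul', 'div', 'sqrt'),
                          parsimony_coefficient=0.001, random_state=0)
model.fit(X[:150], y[:150])                              # training set
print("held-out R^2:", model.score(X[150:], y[150:]))    # testing set
print("evolved formula:", model._program)                # symbolic expression found
```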

2.
In this study, an artificial neural network (ANN) study was carried out to predict the compressive strength of ground granulated blast furnace slag concrete. A data set from a laboratory programme in which a total of 45 concretes were produced was utilized in the ANN study. The concrete mixture parameters were three water–cement ratios (0.3, 0.4, and 0.5), three cement dosages (350, 400, and 450 kg/m3), and four partial slag replacement ratios (20%, 40%, 60%, and 80%). Compressive strengths of moist-cured specimens (22 ± 2 °C) were measured at 3, 7, 28, 90, and 360 days. An ANN model was constructed, trained, and tested using these data. The data used in the ANN model are arranged with six input parameters (cement, ground granulated blast furnace slag, water, hyperplasticizer, aggregate, and age of the samples) and one output parameter, the compressive strength of the concrete. The results showed that an ANN can be an alternative approach for predicting the compressive strength of ground granulated blast furnace slag concrete from the concrete ingredients as input parameters.
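A minimal sketch of an ANN with the six inputs listed above, using scikit-learn's MLPRegressor as a stand-in for the paper's network. The hidden-layer size, the scaling step, and the synthetic data are assumptions for illustration, not the authors' setup or measurements.

```python
# ANN regression sketch: six mix/age inputs -> compressive strength.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
n = 200
X = np.column_stack([
    rng.uniform(350, 450, n),            # cement, kg/m3
    rng.uniform(70, 360, n),             # ground granulated blast furnace slag, kg/m3
    rng.uniform(105, 225, n),            # water, kg/m3
    rng.uniform(0, 10, n),               # hyperplasticizer, kg/m3
    rng.uniform(1600, 1900, n),          # aggregate, kg/m3
    rng.choice([3, 7, 28, 90, 360], n),  # age of sample, days
])
# Placeholder target standing in for measured compressive strength, MPa.
y = 0.05 * X[:, 0] + 0.02 * X[:, 1] + 5 * np.log(X[:, 5]) + rng.normal(0, 2, n)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=1)
ann = make_pipeline(StandardScaler(),
                    MLPRegressor(hidden_layer_sizes=(10,), activation='logistic',
                                 max_iter=5000, random_state=1))
ann.fit(X_tr, y_tr)
print("test R^2:", ann.score(X_te, y_te))
```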

3.
In this study, a fuzzy logic model for predicting the bond strength of lightweight concrete containing mineral admixtures under different curing conditions was devised. A control mixture containing only Portland cement, a second mixture with fly ash replacing 15% of the cement by mass, and a third mixture with silica fume replacing 10% of the cement by mass were produced, and specimens from these three mixtures were cured under three conditions: (1) in a water tank at 20 ± 2 °C, (2) sealed in plastic bags in the laboratory, and (3) in air in the laboratory. At the end of each curing period, three specimens from each combination of mixture and curing condition were tested for compressive and bond strength, and the average of the three values was taken. The results obtained from the fuzzy logic model were compared with the averaged experimental results and were found to be remarkably close. The results show that fuzzy logic can be used to predict the bond strength of lightweight concrete.

4.
This study applies multiple regression analysis and an artificial neural network to estimate the compressive strength of concrete containing various amounts of blast furnace slag and fly ash, based on the properties of the additives (blast furnace slag and fly ash in this case) and on values obtained by non-destructive testing (rebound number and ultrasonic pulse velocity) for 28 concrete mixtures (Mcontrol and M1–M27) at different curing times (3, 7, 28, 90, and 180 days). The results obtained using the two methods are then compared and discussed. They reveal that, although multiple regression analysis predicted the compressive strength accurately from the non-destructive testing values, the artificial neural network models performed better than the multiple regression models. The application of an artificial neural network to predicting the compressive strength of admixture concrete at various curing times shows great potential for inverse problems, being suited to nonlinear functional relationships to which classical methods cannot be applied.

5.
A new design equation is proposed for predicting the shear strength of reinforced concrete (RC) beams without stirrups, using an innovative linear genetic programming methodology. The shear strength was formulated in terms of several effective parameters, such as the shear span-to-depth ratio, the concrete cylinder strength at the date of testing, the amount of longitudinal reinforcement, the lever arm, and the maximum specified size of coarse aggregate. A comprehensive database containing 1938 experimental test results for RC beams was gathered from the literature to develop the model. The performance and validity of the model were further tested using several criteria, and an efficient strategy was adopted to guarantee the generalization of the proposed design equation. For further verification, sensitivity and parametric analyses were conducted. The results indicate that the derived model is an effective tool for estimating the shear capacity of members without stirrups (R = 0.921). The prediction performance of the proposed model was found to be better than that of several existing building codes.

6.
In order to detect the installation compressive stress and monitor the stress relaxation between two bending surfaces on a piece of defensive equipment, a wireless compressive-stress/relaxation-stress measurement system based on pressure-sensitive sensors is developed. The flexible pressure-sensitive stress sensor array is fabricated from a carbon black-filled silicone rubber-based composite. The wireless measurement system integrated with this sensor array is tested with compressive stress in the range from 0 MPa to 3 MPa for performance evaluation. Experimental results indicate that the fractional change in electrical resistance of the pressure-sensitive stress sensor varies linearly and reversibly with the compressive stress, reaching 355% under uniaxial compression, and that the rate of change of the electrical resistance can track the relaxation stress and provide a credible measurement during stress relaxation. The relationship between input (compressive stress) and output (fractional change in electrical resistance) of the pressure-sensitive sensor is ΔR/R0 = σ × 1.2 MPa−1. The wireless compressive-stress measurement system achieves a sensitivity to stress of 1.33 V/MPa at a stress resolution of 920.3 Pa. The newly developed wireless stress measurement system integrated with pressure-sensitive carbon black-filled silicone rubber-based sensors offers high sensitivity to stress, high stress resolution, a simple circuit, and low energy consumption.
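A small worked example using the calibration figures reported above (ΔR/R0 = 1.2 MPa−1 × σ, sensitivity 1.33 V/MPa, resolution 920.3 Pa); the sample stress value is illustrative only.

```python
# Worked arithmetic with the reported sensor calibration.
GAUGE = 1.2            # fractional resistance change per MPa (from the abstract)
SENSITIVITY = 1.33     # output voltage change per MPa of stress, V/MPa
RESOLUTION = 920.3e-6  # stated stress resolution, MPa (920.3 Pa)

def stress_from_resistance(delta_r_over_r0: float) -> float:
    """Invert dR/R0 = GAUGE * sigma to recover the compressive stress in MPa."""
    return delta_r_over_r0 / GAUGE

sigma = 2.0                                                 # MPa, illustrative applied stress
print(stress_from_resistance(GAUGE * sigma))                # -> 2.0 MPa round trip
print("expected output swing:", SENSITIVITY * sigma, "V")   # 2.66 V over 2 MPa
print("smallest resolvable step:", SENSITIVITY * RESOLUTION * 1e3, "mV")  # ~1.22 mV
```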

7.
In this paper, we present a method that reduces the interconnect complexity of N × M resistive sensor arrays from N × M to N + M. In this method, two sets of interconnection lines are arranged in row–column fashion, with every sensor element having one end connected to a row line and the other end to a column line. This interconnection overloading results in crosstalk among all the elements, which spreads information over the whole array. The proposed circuit addresses this effect by minimizing the crosstalk: it exploits the virtually equal potentials at the inputs of an operational amplifier in negative feedback to obtain sufficient isolation among the elements. We present a theoretical analysis of the suitability of the method for small- to moderate-sized sensor arrays and experimentally verify the predicted behavior by lock-in-amplifier-based measurements on a light-dependent resistor (LDR) in a 4 × 4 resistor array. Finally, we present a successful implementation of the method on a 16 × 16 imaging array of LDRs.
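A tiny numerical sketch of the idea behind op-amp-based isolation in such arrays: when every non-addressed line sits at (virtually) the same potential, the parasitic paths carry no current and the addressed element alone sets the output. The transimpedance relation used here (V_out = −Rf/R_ij · V_drive) is the common zero-potential readout; whether it matches the paper's exact circuit is an assumption.

```python
# Zero-potential readout of one element of an N x M resistive array.
def read_element(r_ij: float, v_drive: float = 1.0, r_f: float = 10e3) -> float:
    """Ideal op-amp output for the addressed element (negative feedback, virtual ground)."""
    return -r_f / r_ij * v_drive

def resistance_from_output(v_out: float, v_drive: float = 1.0, r_f: float = 10e3) -> float:
    """Invert the readout to recover the addressed element's resistance."""
    return -r_f * v_drive / v_out

r_true = 4.7e3                        # ohms, illustrative LDR value
v = read_element(r_true)              # ~ -2.13 V
print(v, resistance_from_output(v))   # recovers ~4700 ohms
```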

8.
Human activities are inherently translation-invariant and hierarchical. Human activity recognition (HAR), a field that has garnered much attention in recent years due to its high demand in various application domains, uses time-series sensor data to infer activities. In this paper, a deep convolutional neural network (convnet) is proposed to perform efficient and effective HAR using smartphone sensors, exploiting the inherent characteristics of activities and of 1D time-series signals while providing a way to automatically and data-adaptively extract robust features from raw data. Experiments show that convnets indeed derive relevant and more complex features with every additional layer, although the difference in feature complexity decreases with each additional layer. Exploiting a wider time span of temporal local correlation (kernel sizes of 1 × 9 to 1 × 14) and a small pooling size (1 × 2 to 1 × 3) is shown to be beneficial. Convnets also achieved almost perfect classification of moving activities, including very similar ones that were previously perceived as very difficult to classify. Lastly, convnets outperform other state-of-the-art data mining techniques in HAR on a benchmark dataset collected from 30 volunteer subjects, achieving an overall performance of 94.79% on the test set with raw sensor data, and 95.75% with the additional information of the temporal fast Fourier transform of the HAR data set.
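A minimal 1D convnet sketch in the spirit of the architecture described above, written with tf.keras. Only the kernel-size (9–14) and pooling-size (2–3) ranges come from the text; the input shape (128 time steps × 9 sensor channels), the number of classes (6), and the layer widths are assumptions for illustration.

```python
# 1D convnet sketch for smartphone-sensor HAR windows.
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.Input(shape=(128, 9)),                          # raw sensor window (assumed shape)
    tf.keras.layers.Conv1D(64, kernel_size=9, activation='relu'),
    tf.keras.layers.MaxPooling1D(pool_size=2),
    tf.keras.layers.Conv1D(64, kernel_size=9, activation='relu'),
    tf.keras.layers.MaxPooling1D(pool_size=2),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(128, activation='relu'),
    tf.keras.layers.Dense(6, activation='softmax'),          # one unit per activity class
])
model.compile(optimizer='adam', loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])
model.summary()
# model.fit(x_train, y_train, epochs=..., validation_data=(x_val, y_val))
```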

9.
Parallel Computing, 2014, 40(5–6): 144–158
One of the main difficulties of multi-point statistical (MPS) simulation based on annealing techniques or genetic algorithms is the excessive amount of time and memory required to achieve convergence. In this work we propose code optimizations and parallelization schemes for a genetic-algorithm-based MPS code with the aim of speeding up execution. The code optimizations involve reducing cache misses in array accesses, avoiding branching instructions, and increasing the locality of the accessed data. The hybrid parallelization scheme combines fine-grain parallelization of loops using a shared-memory programming model (OpenMP) with coarse-grain distribution of load among several computational nodes using a distributed-memory programming model (MPI). Convergence, execution time, and speed-up results are presented using 2D training images of sizes 100 × 100 × 1 and 1000 × 1000 × 1 on a distributed-shared memory supercomputing facility.

10.
This study consists of two parts. (i) Experimental analysis: shot peening is a method of improving the fatigue resistance of metal parts by creating regions of residual stress. In this study, the residual stresses induced in C-1020 steel specimens by various shot peening intensities are investigated using the electrochemical layer removal method. The best result is obtained with a peening intensity of 0.26 mm A, for which the stress in the shot-peened material is −276 MPa, while the maximum residual stress obtained is −363 MPa at a peening intensity of 0.43 mm A. (ii) Mathematical modelling analysis: an ANN is proposed to determine the residual stresses for various shot peening intensities using the results of the experimental analysis. The back-propagation learning algorithm with two different variants and a logistic sigmoid transfer function were used in the network. To train the neural network, the limited experimental measurements were used as training and test data. The best fit to the training data was obtained with four neurons in the hidden layer, which made it possible to predict the residual stress with an accuracy at least as good as the experimental error over the whole experimental range. After training, the R2 values were found to be 0.996112 and 0.99896 for specimens annealed before peening and shot-peened only, respectively; for the testing data these values were 0.995858 and 0.999143, respectively. As the mathematical modelling results show, the calculated residual stresses are clearly within acceptable uncertainties.

11.
In this study, the Marshall Stability (MS) of asphalt concrete under varying temperature and exposure time was modelled using fuzzy logic and a statistical method. To investigate the Marshall Stability of asphalt concrete as a function of exposure time and environmental temperature, exposure times of 1.5, 3, 4.5, and 6 h and temperatures of 30, 40, and 50 °C were selected. The MS of the asphalt concrete at 17 °C (laboratory temperature) was used as the reference. The results showed that the MS of the asphalt core samples decreased by 40.16% at 30 °C after 1.5 h and by 62.39% after 6 h; at 40 °C the decrease was 74.31% after 1.5 h and 78.10% after 6 h; and at 50 °C the stability decreased by 83.22% after 1.5 h and 88.66% after 6 h. The experimental results, the fuzzy logic model, and the statistical results exhibited good correlation, with a correlation coefficient of R = 0.99 for the fuzzy logic model and R2 = 0.9 for the statistical method. Based on these results, both the fuzzy logic method and the statistical method can be used to model the stability of asphalt concrete under varying temperature and exposure time.

12.
The implicit Colebrook–White equation has been widely used to estimate the friction factor for turbulent fluid flow in rough pipes. This paper presents a state-of-the-art review of the most currently available explicit alternatives to the Colebrook–White equation. An extensive comparison test was established on a 20 × 500 grid for a wide range of relative roughness (ε/D) and Reynolds number (R) values (1 × 10−6 ≤ ε/D ≤ 5 × 10−2; 4 × 103 ≤ R ≤ 108), covering a large portion of the turbulent flow zone in Moody's diagram. Based on the comprehensive error analysis, the points (pairs of ε/D and R values) at which the maximum absolute and the maximum relative errors occur are identified. Most of these approximations provided friction factor estimates characterized by a mean absolute error of 5 × 10−4 and a maximum absolute error of 4 × 10−3, with a mean relative error of 1.3% and a maximum relative error of 5.8%, over the entire range of ε/D and R values. For practical purposes, the complete results for the maximum and mean relative errors versus the 20 sets of ε/D values are also shown in two comparative figures. The error analysis of these approximations makes it possible to identify, in practice, the most accurate of all the previous explicit models and demonstrates its flexibility for estimating the turbulent flow friction factor. Comparative analysis of the mean relative error profile revealed that the ranking of the six best-fitting equations examined was in good agreement with the best-model selection criterion reported in the recent literature, for all performed simulations.
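A short sketch of the comparison idea: solve the implicit Colebrook–White equation by fixed-point iteration and measure the relative error of one well-known explicit approximation at a single grid point. Swamee–Jain is used here purely as an example of an explicit formula; it is not necessarily among the equations ranked in the paper.

```python
# Implicit Colebrook-White friction factor vs. an explicit approximation.
import math

def colebrook(rel_rough: float, reynolds: float, tol: float = 1e-12) -> float:
    """Solve 1/sqrt(f) = -2 log10(eps/D/3.7 + 2.51/(Re sqrt(f))) by fixed-point iteration."""
    x = 0.02 ** -0.5                          # initial guess for 1/sqrt(f)
    for _ in range(100):
        x_new = -2.0 * math.log10(rel_rough / 3.7 + 2.51 * x / reynolds)
        if abs(x_new - x) < tol:
            break
        x = x_new
    return x_new ** -2

def swamee_jain(rel_rough: float, reynolds: float) -> float:
    """Explicit approximation f = 0.25 / log10(eps/D/3.7 + 5.74/Re^0.9)^2."""
    return 0.25 / math.log10(rel_rough / 3.7 + 5.74 / reynolds ** 0.9) ** 2

eps_d, re = 1e-4, 1e6                         # one (eps/D, Re) grid point, illustrative
f_exact = colebrook(eps_d, re)
f_approx = swamee_jain(eps_d, re)
print(f_exact, f_approx, abs(f_approx - f_exact) / f_exact * 100, "% relative error")
```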

13.
The most practical way to obtain spatially broad and continuous measurements of surface temperature in the data-sparse cryosphere is satellite remote sensing. The uncertainties in satellite-derived land surface temperatures (LSTs) must be understood in order to develop the internally consistent, decade-scale LST records needed for climate studies. In this work we assess satellite-derived “clear-sky” LST products from the Moderate Resolution Imaging Spectroradiometer (MODIS) and the Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER), and LSTs derived from the Enhanced Thematic Mapper Plus (ETM+), over snow and ice on Greenland. When possible, we compare satellite-derived LSTs with in-situ air temperature observations from Greenland Climate Network (GC-Net) automatic weather stations (AWS). We find that MODIS, ASTER, and ETM+ provide reliable and consistent LSTs under clear-sky conditions and relatively flat terrain over snow and ice targets across a range of temperatures from −40 to 0 °C. The satellite-derived LSTs agree within a relative RMS uncertainty of ~0.5 °C. The good agreement among the LSTs derived from the various satellite instruments is especially notable since different spectral channels and different retrieval algorithms are used to calculate LST from the raw satellite data. The AWS record in-situ data at a “point” while the satellite instruments record data over areas ranging in size from 57 × 57 m (ETM+) and 90 × 90 m (ASTER) to 1 × 1 km (MODIS). Surface topography and other factors contribute to the variability of LST within a pixel, so the AWS measurements may not be representative of the LST of the pixel. Without more information on the local spatial patterns of LST, the AWS LST cannot be considered valid ground truth for the satellite measurements, with an RMS uncertainty of ~2 °C. Despite the relatively large AWS-derived uncertainty, we find the LST data are characterized by high accuracy but uncertain absolute precision.

14.
A dataset of 237 human Ether-à-go-go Related Gene (hERG) potassium channel inhibitors (180 of which were used for model building and validation, while 57 constituted the “true” external prediction set), collected from 22 literature sources, was modeled by 3D-SDAR. To produce reliable and reproducible classification models for hERG blocking, the initial set of 180 chemicals was split into two subsets: a balanced modeling set of 118 compounds and an unbalanced validation set of 62 compounds. A PLS bagging-like algorithm written in Matlab was used to process the data and assign each compound to one of the two activity classes (hERG+ or hERG−). The best predictive model, evaluated on a fully randomized hold-out test set (comprising 20% of the modeling set), used 4 latent variables and a grid of 6 ppm × 6 ppm × 1 Å in the C–C region, 6 ppm × 30 ppm × 1 Å in the C–N region, and 30 ppm × 30 ppm × 1 Å in the N–N region. An overall accuracy of 0.84 was obtained for both the hold-out test set and the validation set. Furthermore, an external prediction set of 57 drugs and drug derivatives was used to estimate the true predictive power of the reported 3D-SDAR model; a slight reduction of the overall accuracy, down to 0.77, was observed. A 3D-SDAR map of the most frequently occurring bins and their projection onto the standard coordinate space of the chemical structures allowed identification of a three-center toxicophore composed of two aromatic rings and an amino group. A U test along the distance axis of the most frequently occurring 3D-SDAR bins was used to set the distance limits of the toxicophore. This toxicophore was found to be similar to an earlier reported phospholipidosis (PLD) toxicophore.

15.
An electrochemical sensor based on a triazole (TA) self-assembled monolayer (SAM) modified gold electrode (TA SAM/Au) was fabricated, and the electrochemical behavior of epinephrine (EP) at the TA SAM/Au was studied. The TA SAM/Au shows excellent electrocatalytic activity for the oxidation of EP and accelerates the electron transfer rate; the diffusion coefficient is 1.135 × 10−6 cm2 s−1. Under the optimum experimental conditions (0.1 mol L−1, pH 4.4, sodium borate buffer; accumulation time 180 s; accumulation potential 0.6 V; scan rate 0.1 V s−1), the cathodic peak current of EP versus its concentration shows a good linear relation in the ranges 1.0 × 10−7 to 1.0 × 10−5 mol L−1 and 1.0 × 10−5 to 6.0 × 10−4 mol L−1 by square wave adsorptive stripping voltammetry (SWASV), with correlation coefficients of 0.9985 and 0.9996, respectively. The detection limit is as low as 1.0 × 10−8 mol L−1, and the TA SAM/Au can be used for the determination of EP in practical injection samples. Meanwhile, the oxidative peak potentials of EP and ascorbic acid (AA) are well separated, by about 200 ± 10 mV, at the TA SAM/Au, and the oxidation peak current increases approximately linearly with increasing concentration of both EP and AA in the range 2.0 × 10−5 to 1.6 × 10−4 mol L−1, so the sensor can be used for the simultaneous determination of EP and AA.

16.
Reversible contrast mapping (RCM) and its various modified versions are used extensively in reversible watermarking (RW) to embed secret information into digital content. RCM-based RW applies a simple integer transform to pairs of pixels, and their least significant bits (LSBs) are used for data embedding; the transform is perfectly invertible even if the LSBs of the transformed pixels are lost during data embedding. RCM offers a high embedding rate at relatively low visual (embedding) distortion. Moreover, its low computation cost and ease of hardware realization make it attractive for real-time implementation. To this end, this paper proposes a field programmable gate array (FPGA) based very large scale integration (VLSI) architecture of the RCM-RW algorithm for digital images that can serve the purpose of media authentication in a real-time environment. Two architectures are developed, one for an (8 × 8) block size and the other for a (32 × 32) block size. The proposed architecture allows a 6-stage pipelining technique to speed up circuit operation. For a cover image of block size (32 × 32), the proposed architecture requires 9881 slices, 9347 slice flip-flops, 11291 4-input LUTs, and 3 BRAMs, and achieves a data rate of 1.0395 Mbps at an operating frequency as high as 98.76 MHz.
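A minimal sketch of the contrast-mapping integer transform on a pixel pair, to illustrate the kind of forward/inverse mapping the hardware implements. Range checks, the selection of embeddable pairs, and the actual LSB embedding logic of the full RW scheme are omitted; this is illustrative, not the paper's circuit.

```python
# Forward/inverse integer contrast mapping on a pixel pair.
import math

def rcm_forward(x: int, y: int) -> tuple[int, int]:
    """Forward contrast mapping (x, y) -> (2x - y, 2y - x)."""
    return 2 * x - y, 2 * y - x

def rcm_inverse(xp: int, yp: int) -> tuple[int, int]:
    """Integer inverse of the forward mapping (exact for untouched pairs)."""
    return math.ceil((2 * xp + yp) / 3), math.ceil((xp + 2 * yp) / 3)

pair = (97, 113)                                # illustrative 8-bit pixel pair
assert rcm_inverse(*rcm_forward(*pair)) == pair  # lossless round trip
print(rcm_forward(*pair), "->", rcm_inverse(*rcm_forward(*pair)))
```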

17.
We study the primary decomposition of lattice basis ideals. These ideals are binomial ideals whose generators are given by the elements of a basis of a saturated integer lattice. We show that the minimal primes of such an ideal are completely determined by the sign pattern of the basis elements, while the embedded primes are not. As a special case we examine the ideal generated by the 2 × 2 adjacent minors of a generic m × n matrix, written out below. In particular, we determine all minimal primes in the 3 × n case. We also present faster ways of computing a generating set for the associated toric ideal from a lattice basis ideal.
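For concreteness, the 2 × 2 adjacent minors of a generic m × n matrix X = (x_{ij}) are the binomials formed from each pair of adjacent rows and adjacent columns:

```latex
% Generators of the ideal of 2 x 2 adjacent minors of a generic m x n matrix
\[
  x_{i,j}\,x_{i+1,j+1} \;-\; x_{i,j+1}\,x_{i+1,j},
  \qquad 1 \le i \le m-1,\quad 1 \le j \le n-1 .
\]
```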

18.
The ferroelectric properties of direct-patterned PZT (PbZr0.52Ti0.48O3) films, 460 μm × 460 μm in size and 510 nm thick, were analyzed for application to micro-detecting devices. A photosensitive solution containing ortho-nitrobenzaldehyde was used to prepare the direct-patterned PZT film. The PZT solution was coated on a Pt(1 1 1)/Ti/SiO2/Si(1 0 0) substrate three times to obtain a half-micron-thick film, and the direct-patterning process was repeated three times to define a pattern on the multi-layer PZT film. Through the intermediate and final annealing of the direct-patterned PZT film, no shrinkage along the horizontal direction was observed under these experimental conditions, i.e., the size of the pattern was preserved after annealing; only a thickness reduction was observed after each annealing treatment. The ferroelectric properties of the direct-patterned PZT film (460 μm × 460 μm in size and 510 nm thick) were compared with those of an un-patterned conventional PZT film and were shown to be almost the same. This work confirms the high potential of direct-patternable PZT films for application to micro-devices without the physical damage introduced by dry etching.

19.
Noise elimination is an important pre-processing step for magnetic resonance (MR) images used for clinical purposes. In the present study, the bilateral filter (BF), an edge-preserving method, was used for Rician noise removal in MR images. The choice of BF parameters affects the denoising performance; therefore, as a novel approach, the parameters of the BF were optimized using a genetic algorithm (GA). First, Rician noise with different variances (σ = 10, 20, 30) was added to simulated T1-weighted brain MR images. To find the optimum filter parameters, the GA was applied to the noisy images over search regions of window size [3 × 3, 5 × 5, 7 × 7, 11 × 11, and 21 × 21], spatial sigma [0.1–10], and intensity sigma [1–60], with the peak signal-to-noise ratio (PSNR) used as the fitness value for the optimization. After determining the optimal parameters, we investigated the results of the proposed BF parameters on both simulated and clinical MR images. To understand the importance of parameter selection in the BF, we compared denoising with the proposed parameters against previously used BFs using quality metrics such as the mean squared error (MSE), PSNR, signal-to-noise ratio (SNR), and structural similarity index metric (SSIM). The quality of the images denoised with the proposed parameters was validated using both visual inspection and quantitative metrics. The experimental results showed that the BF with the parameters proposed here performed better than BFs with other previously proposed parameters, in both the preservation of edges and the removal of different levels of Rician noise from MR images. It can be concluded that the denoising performance of the BF is highly dependent on optimal parameter selection.
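A small sketch of the parameter-search idea described above: denoise with OpenCV's bilateral filter and score candidate (window size, spatial sigma, intensity sigma) settings by PSNR against the clean image. A random search is used here as a simple stand-in for the GA; the noise model and the search ranges follow the text, while the synthetic gradient "phantom" and everything else are assumptions.

```python
# Bilateral-filter parameter search scored by PSNR (random search as a GA stand-in).
import numpy as np
import cv2

def psnr(clean: np.ndarray, test: np.ndarray) -> float:
    mse = np.mean((clean.astype(np.float64) - test.astype(np.float64)) ** 2)
    return 10 * np.log10(255.0 ** 2 / mse)

# Synthetic gradient image standing in for a simulated MR slice.
clean = np.tile(np.linspace(60, 200, 256), (256, 1)).astype(np.uint8)
rng = np.random.default_rng(0)
sigma = 20.0                                   # one Rician noise level from the text
noisy = np.sqrt((clean + rng.normal(0, sigma, clean.shape)) ** 2 +
                rng.normal(0, sigma, clean.shape) ** 2)       # magnitude (Rician) noise
noisy = np.clip(noisy, 0, 255).astype(np.uint8)

best = (-np.inf, None)
for _ in range(50):                            # random search instead of a GA
    d = int(rng.choice([3, 5, 7, 11, 21]))     # window size candidates from the text
    sigma_space = rng.uniform(0.1, 10)         # spatial sigma range from the text
    sigma_color = rng.uniform(1, 60)           # intensity sigma range from the text
    denoised = cv2.bilateralFilter(noisy, d, sigma_color, sigma_space)
    score = psnr(clean, denoised)
    if score > best[0]:
        best = (score, (d, sigma_space, sigma_color))
print("best PSNR and parameters:", best)
```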

20.
In this study, low-alloy steel substrates were borided by the pack boriding process for 2, 4, and 6 h at 900 °C. Microstructural observations were conducted using SEM. The structural composition of the layers consists of a boron-rich phase (FeB) and an iron-rich phase (Fe2B). First, experimental indentation studies were carried out to determine the load–unload curves of the FeB layers at different peak loads. Important parameters such as the hardness and Young's modulus of the FeB layers, as well as the contact area, were obtained from the experimental indentation test data. After the mechanical characterization of the samples, finite element modeling was used to simulate the mechanical response of the FeB layer on the low-alloy steel substrate using the ABAQUS software package. The unique contribution of this study, in contrast to previous methods, is the estimation of the yield strength of the FeB layer by combining the experimental indentation work with finite element modeling (FEM).
