1.
This paper presents a novel No-Reference Video Quality Assessment (NR-VQA) model that uses proposed 3D steerable wavelet transform-based Natural Video Statistics (NVS) features together with human perceptual features. We also propose a novel two-stage regression scheme that significantly improves the overall performance of quality estimation. In the first stage, the transform-based NVS features and the human perceptual features are separately passed through a hybrid regression scheme: Support Vector Regression (SVR) followed by polynomial curve fitting. The two visual quality scores predicted in the first stage are then used as features for a second, similar stage, which fuses them at score level to predict the final quality of each distorted video. Extensive experiments were conducted using five authentic-distortion and four synthetic-distortion databases. The results demonstrate that the proposed method outperforms published state-of-the-art benchmark methods on the synthetic-distortion databases and is among the top performers on the authentic-distortion databases. The source code is available at https://github.com/anishVNIT/two-stage-vqa.
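The two-stage scheme described above can be sketched roughly as follows; the feature matrices, split sizes, and the degree-3 polynomial are illustrative assumptions, not the paper's actual NVS/perceptual features or settings.

```python
# Sketch of the hybrid regressor (SVR followed by polynomial curve
# fitting) and the two-stage score fusion, on synthetic stand-in data.
import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(0)

def hybrid_regress(X_train, y_train, X_test, degree=3):
    """Stage regressor: SVR whose raw scores are refined by a poly fit."""
    svr = SVR(kernel="rbf").fit(X_train, y_train)
    # Map raw SVR scores onto subjective scores with a polynomial.
    coeffs = np.polyfit(svr.predict(X_train), y_train, degree)
    return np.polyval(coeffs, svr.predict(X_test))

# Placeholder feature groups standing in for NVS / perceptual features.
X_nvs, X_per = rng.normal(size=(200, 8)), rng.normal(size=(200, 5))
y = X_nvs[:, 0] + 0.5 * X_per[:, 1] + 0.1 * rng.normal(size=200)
tr, te = slice(0, 150), slice(150, 200)

# Stage 1: one hybrid regressor per feature group.
s1 = hybrid_regress(X_nvs[tr], y[tr], X_nvs[te])
s2 = hybrid_regress(X_per[tr], y[tr], X_per[te])

# Stage 2: the two predicted scores become features for a final fusion.
S_train = np.column_stack([
    hybrid_regress(X_nvs[tr], y[tr], X_nvs[tr]),
    hybrid_regress(X_per[tr], y[tr], X_per[tr]),
])
final = hybrid_regress(S_train, y[tr], np.column_stack([s1, s2]))
print(final.shape)  # one fused quality score per test video
```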
2.
In recent building practice, rapid construction is a principal requirement, and in designing concrete structures compressive strength is the most significant parameter. While the 3-day and 7-day compressive strengths reflect early-age behaviour, the ultimate strength is paramount. This study develops mathematical models for predicting the later-age compressive strength of concrete incorporating ethylene vinyl acetate (EVA). The Kolmogorov-Smirnov (KS) goodness-of-fit test was used to examine the distribution of the data. The compressive strength of EVA-modified concrete was studied by incorporating various concentrations of EVA as an admixture and testing at ages of 28, 56, 90, 120, 210, and 365 days. An accelerated compressive strength at 3.5 hours was taken as the reference from which all the specified strengths were predicted by linear regression. The KS test statistic (D) in each case was below the critical value of 0.521 at a significance level of 0.05, indicating that the data were normally distributed. The compressive-strength tests showed that the strength of EVA-modified specimens increased at all ages, with the optimum dosage at a 16% EVA concentration. Furthermore, the predicted compressive strengths lie within 6% of the actual values for all mixes, which indicates the practicability of the regression equations. This work may help clarify the role of EVA as a viable material in polymer-based cement composites.
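The two statistical steps described above, a KS normality check followed by a linear regression from the 3.5-hour accelerated strength, might look like this in outline; the strength values below are invented placeholders, not the study's measurements, and only the 0.521 critical value is taken from the text.

```python
# KS goodness-of-fit test plus linear regression from accelerated
# strength, on hypothetical data.
import numpy as np
from scipy import stats

# Hypothetical accelerated (3.5 h) and 28-day strengths (MPa) per mix.
accel = np.array([12.1, 13.4, 14.0, 15.2, 16.1, 14.8])
day28 = np.array([31.5, 33.8, 35.0, 37.9, 39.6, 36.7])

# KS goodness-of-fit against a normal fitted to the sample.
d_stat, _ = stats.kstest(day28, "norm",
                         args=(day28.mean(), day28.std(ddof=1)))
normal = d_stat < 0.521  # critical value quoted in the study (alpha=0.05)

# Linear regression fit: 28-day strength from accelerated strength.
res = stats.linregress(accel, day28)
predicted = res.slope * accel + res.intercept
error_pct = np.abs(predicted - day28) / day28 * 100
print(normal, error_pct.max())
```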
3.
Journal of Dairy Science, 2022, 105(5): 4314-4323
We tested the hypothesis that the size of a beef cattle population destined for use on dairy females is smaller under optimum-contribution selection (OCS) than under truncation selection (TRS) at the same genetic gain (ΔG) and the same rate of inbreeding (ΔF). We used stochastic simulation to estimate the true ΔG realized at a ΔF of 0.005 in breeding schemes with OCS or TRS. The schemes for the beef cattle population also differed in the number of purebred offspring per dam and the total number of purebred offspring per generation. Dams of the next generation were selected exclusively among the one-year-old heifers. All dams were donors for embryo transfer and produced a maximum of 5 or 10 offspring. The total number of purebred offspring per generation was 400, 800, 1,600, or 4,000 calves and was used as the measure of population size. The rate of inbreeding was predicted and controlled using pedigree relationships. Each OCS (TRS) scheme was simulated for 10 discrete generations and replicated 100 (200) times. With a maximum of 10 offspring per dam, the OCS and TRS schemes required approximately 783 and 1,257 purebred offspring per generation, respectively, to realize a true ΔG of €14 and a ΔF of 0.005 per generation. Schemes with a maximum of 5 offspring per dam required more purebred offspring per generation to realize a similar true ΔG and ΔF. Our results show that OCS and multiple ovulation and embryo transfer act on selection intensity through different mechanisms, achieving fewer selection candidates and fewer selected sires and dams than TRS at the same ΔG and a fixed ΔF. We therefore advocate a breeding scheme with OCS and multiple ovulation and embryo transfer for beef cattle destined for use on dairy females, as it is favorable from both an economic and a carbon-footprint perspective.
4.
Reliable prediction of flooding conditions is needed for sizing and operating packed extraction columns. Because of the complex interplay of physicochemical properties, operational parameters, and packing-specific properties, it is challenging to develop accurate semi-empirical or rigorous models with a wide validity range, and state-of-the-art models may therefore fail to predict flooding accurately. To overcome this problem, a data-driven model based on Gaussian processes is developed to predict flooding in packed liquid-liquid and high-pressure extraction columns. The optimized Gaussian process for the liquid-liquid extraction column achieves an average absolute relative error (AARE) of 15.23%, while that for the high-pressure extraction column achieves an AARE of 13.68%. Both models predict flooding curves accurately for different packing geometries and chemical systems.
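A minimal stand-in for the data-driven approach described above: a Gaussian process regressor trained on hypothetical operating conditions and scored with the AARE metric used in the text. The three input features and all data are invented for illustration.

```python
# Gaussian process regression for a flooding-point-style target,
# evaluated with the average absolute relative error (AARE).
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(1)
# Placeholder inputs: e.g. density difference, viscosity, packing area.
X = rng.uniform(size=(120, 3))
y = 50 + 30 * X[:, 0] - 10 * X[:, 1] + rng.normal(scale=1.0, size=120)

gp = GaussianProcessRegressor(kernel=RBF() + WhiteKernel(),
                              normalize_y=True)
gp.fit(X[:90], y[:90])
pred = gp.predict(X[90:])

aare = np.mean(np.abs(pred - y[90:]) / np.abs(y[90:])) * 100
print(f"AARE = {aare:.2f} %")
```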
5.
Prediction of the mode I fracture toughness (KIC) of rock is of significant importance in rock engineering analyses. In this study, linear multiple regression (LMR) and gene expression programming (GEP) were used to derive a reliable relationship for determining the mode I fracture toughness of rock. The model was developed from 60 datasets taken from the literature. Three mechanical parameters of the rock mass, uniaxial compressive strength (UCS), Brazilian tensile strength (BTS), and elastic modulus (E), were selected as input parameters. The data were divided into two random groups for training and testing. Statistical linear and artificial-intelligence-based nonlinear analyses were then conducted on the training data to build predictive models of KIC, which were subsequently evaluated on the testing data. Model efficiency was assessed with the coefficient of determination (R2), root mean square error (RMSE), and mean absolute error (MAE). On the testing datasets, the GEP model achieved R2 = 0.87, RMSE = 0.188, and MAE = 0.156, against R2 = 0.74, RMSE = 0.473, and MAE = 0.223 for the LMR model, indicating that the GEP model delivers superior performance with a higher R2 and lower errors.
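The evaluation indices quoted above (R2, RMSE, MAE) can be computed as follows; the KIC values here are toy numbers, not the study's data.

```python
# Compute R^2, RMSE, and MAE for a set of actual vs. predicted values.
import numpy as np

def r2_rmse_mae(actual, predicted):
    actual, predicted = np.asarray(actual), np.asarray(predicted)
    resid = actual - predicted
    ss_res = np.sum(resid ** 2)
    ss_tot = np.sum((actual - actual.mean()) ** 2)
    r2 = 1 - ss_res / ss_tot          # coefficient of determination
    rmse = np.sqrt(np.mean(resid ** 2))
    mae = np.mean(np.abs(resid))
    return r2, rmse, mae

# Toy KIC values (MPa*m^0.5) to exercise the function.
y_true = [0.8, 1.1, 1.5, 2.0, 2.4]
y_pred = [0.9, 1.0, 1.6, 1.9, 2.5]
r2, rmse, mae = r2_rmse_mae(y_true, y_pred)
print(round(r2, 3), round(rmse, 3), round(mae, 3))  # → 0.97 0.1 0.1
```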
6.
Data mining techniques have been applied successfully in many significant fields, including medical research. Despite the wealth of data available within health-care systems, practical analysis tools for discovering hidden relationships and trends in these data are lacking. The complexity of medical data, which is unfavorable for most models, poses a considerable challenge for prediction, and a model's ability to diagnose disease accurately and efficiently is extremely important. The model must therefore fit the data well, so that learning from previous data is efficient and diagnosis is highly accurate. This work is motivated by the limited number of regression analysis tools for multivariate counts in the literature. We propose two regression models for count data based on flexible distributions, namely the multinomial Beta-Liouville and the multinomial scaled Dirichlet, and evaluate them on the problem of disease diagnosis. Performance is measured by prediction accuracy, which depends on the nature and complexity of the dataset. Our results show the efficiency of the two proposed regression models: both are competitive with regression models previously used for count data and with the best results in the literature.
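The two count distributions named above are not available in standard libraries; as a hedged stand-in, the sketch below implements their simpler relative, the Dirichlet-multinomial, which likewise models overdispersed count vectors, and checks it on a case with a known closed form.

```python
# Log-pmf of the Dirichlet-multinomial distribution for count data.
import numpy as np
from scipy.special import gammaln

def dirichlet_multinomial_logpmf(x, alpha):
    """Log-pmf of the Dirichlet-multinomial for one count vector x."""
    x, alpha = np.asarray(x, float), np.asarray(alpha, float)
    n, A = x.sum(), alpha.sum()
    return (gammaln(n + 1) - gammaln(x + 1).sum()
            + gammaln(A) - gammaln(n + A)
            + (gammaln(x + alpha) - gammaln(alpha)).sum())

# Sanity check: with alpha = (1, 1), every split of n counts between the
# two categories is equally likely, so the pmf is 1 / (n + 1).
x = np.array([3, 4])                                  # n = 7
p = np.exp(dirichlet_multinomial_logpmf(x, np.array([1.0, 1.0])))
print(p)  # 1/8 = 0.125
```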
7.
The existing analytical average bit error rate (ABER) expression for conventional generalised spatial modulation (CGSM) does not agree well with Monte Carlo simulation results in the low signal-to-noise ratio (SNR) region. The first contribution of this paper is therefore a new, easily evaluated analytical ABER expression that better validates the simulation results at low SNRs. Secondly, a novel system termed CGSM with enhanced spectral efficiency (CGSM-ESE) is presented, realised by applying a rotation angle to one of the two active transmit antennas; this increases the overall spectral efficiency by 1 bit/s/Hz compared with the equivalent CGSM system. The third contribution is an analytical ABER expression that validates the simulation results for CGSM-ESE. Finally, to improve the ABER performance of CGSM-ESE, three link adaptation algorithms are developed. Assuming full knowledge of the channel at the receiver, the proposed algorithms select a subset of channel gain vector (CGV) pairs based on the Euclidean distance between all CGV pairs, CGV splitting, CGV amplitudes, or a combination of these.
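The distance-based subset selection mentioned last can be sketched as below; the antenna counts, the random channel, and the subset size of four are illustrative assumptions rather than the paper's configuration.

```python
# Rank channel gain vector (CGV) pairs by pairwise Euclidean distance
# and keep the best-separated subset, on a random complex channel.
import numpy as np

rng = np.random.default_rng(4)
Nr, Nt = 4, 6                         # receive antennas, CGVs (columns)
H = rng.normal(size=(Nr, Nt)) + 1j * rng.normal(size=(Nr, Nt))

# Euclidean distance between every pair of CGVs (columns of H).
pairs, dists = [], []
for i in range(Nt):
    for j in range(i + 1, Nt):
        pairs.append((i, j))
        dists.append(np.linalg.norm(H[:, i] - H[:, j]))

# Keep the pairs with the largest separation (easiest to detect).
order = np.argsort(dists)[::-1]
subset = [pairs[k] for k in order[:4]]
print(subset)
```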
8.
The strong-tracking idea is introduced into the cubature Kalman filter (CKF): a strong-tracking CKF effectively overcomes the degradation in filtering performance that the CKF suffers under model uncertainty and abrupt state changes. Analysis of existing methods for computing multiple fading factors shows that they use only the diagonal elements of the covariance matrix and ignore the correlations between states, so they cannot fully exploit the advantages of multiple fading factors. This paper therefore proposes a fading factor matrix, derives its solution from the orthogonality principle, and presents a multiple-fading-factor strong-tracking CKF algorithm. The algorithm removes the traditional restriction that the multiple fading factors form a vector and no longer requires each fading factor to be greater than 1. Simulations verify that the algorithm achieves better filtering accuracy and robustness and better meets the requirements of engineering applications.
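The cubature step underlying the CKF discussed above can be sketched as follows; the fading-factor extension would rescale the predicted covariance and is omitted here, and the toy linear dynamics are only for illustration.

```python
# Basic cubature prediction step of a CKF: 2n cubature points are drawn
# from the state mean/covariance, propagated through the model, and
# averaged back into a predicted mean and covariance.
import numpy as np

def cubature_predict(x, P, f):
    n = len(x)
    S = np.linalg.cholesky(P)
    # 2n cubature points at +/- sqrt(n) along each Cholesky column.
    xi = np.sqrt(n) * np.hstack([np.eye(n), -np.eye(n)])
    pts = x[:, None] + S @ xi
    prop = np.apply_along_axis(f, 0, pts)       # propagate each point
    x_pred = prop.mean(axis=1)
    dev = prop - x_pred[:, None]
    P_pred = dev @ dev.T / (2 * n)
    return x_pred, P_pred

f = lambda s: np.array([s[0] + 0.1 * s[1], 0.9 * s[1]])  # toy dynamics
x_pred, P_pred = cubature_predict(np.array([1.0, 0.5]), np.eye(2), f)
print(x_pred)
```

For linear dynamics the cubature mean reduces exactly to f(x) and the covariance to F P Fᵀ, which makes the step easy to verify by hand.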
9.
Wheel sets guide the train and carry and transmit loads while it runs, so tread and flange wear strongly affect the operational safety of metro trains and the service life of the rails. Based on the wear mechanism of metro wheels and the characteristics of wheel-size measurement data, a prediction model for flange-thickness wear is built with the gradient boosting decision tree (GBDT) algorithm. Validation on arbitrarily selected wheel-set data shows that the model achieves good prediction accuracy and can forecast the trend of flange thickness one to six months ahead. This comparatively long prediction horizon can guide metro maintenance departments in shifting wheel-set maintenance from condition-based repair to preventive repair.
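A rough sketch of the GBDT wear model described above, using scikit-learn; the features, the synthetic wear law, and the split sizes are all invented stand-ins for the metro's actual inspection records.

```python
# Gradient boosting decision trees predicting flange thickness from
# synthetic wheel-set measurements.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(2)
# Features: current flange thickness (mm), running mileage (1e4 km),
# months since last re-profiling. Target: flange thickness next month.
X = np.column_stack([
    rng.uniform(26, 32, 300),          # flange thickness
    rng.uniform(0, 30, 300),           # mileage
    rng.integers(0, 24, 300),          # months since re-profiling
])
y = X[:, 0] - 0.02 * X[:, 1] - 0.01 * X[:, 2] \
    + rng.normal(0, 0.05, 300)

gbdt = GradientBoostingRegressor(n_estimators=200, max_depth=3)
gbdt.fit(X[:250], y[:250])
pred = gbdt.predict(X[250:])
mae = np.mean(np.abs(pred - y[250:]))
print(f"hold-out MAE: {mae:.3f} mm")
```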
10.
The main challenges in developing data-based models lie in the high-dimensional, and possibly incomplete, observations stored from industrial processes. The variational autoencoder (VAE), a deep learning method, has been applied to extract useful information or features from high-dimensional datasets. Because the existing VAE is unsupervised, this work proposes an output-relevant VAE for extracting output-relevant features. Using the correlation between process variables, a different weight is assigned to each input variable. The similarity between the stored samples and a query sample is evaluated with the symmetric Kullback–Leibler (SKL) divergence, and data relevant for modeling are selected according to the SKL values. Gaussian process regression (GPR) is then used to establish a model between the inputs and the corresponding output at the query sample. In addition, because missing data are common in the output dataset, the GPR parameters and the missing data are estimated simultaneously. A practical industrial debutanizer process illustrates the effectiveness of the proposed method.
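One step of the pipeline above, sketched under simplifying assumptions: latent features are taken to be diagonal Gaussians (as a VAE encoder would output), stored samples are ranked against a query by SKL divergence, and a GPR is fitted on the selected neighbours. All data and dimensions are invented.

```python
# SKL-divergence-based sample selection followed by a local GPR.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor

def skl_diag_gauss(mu1, var1, mu2, var2):
    """Symmetric KL divergence between two diagonal Gaussians."""
    kl12 = 0.5 * np.sum(np.log(var2 / var1)
                        + (var1 + (mu1 - mu2) ** 2) / var2 - 1)
    kl21 = 0.5 * np.sum(np.log(var1 / var2)
                        + (var2 + (mu1 - mu2) ** 2) / var1 - 1)
    return kl12 + kl21

rng = np.random.default_rng(3)
n = 100
mus = rng.normal(size=(n, 4))              # latent means per sample
vars_ = rng.uniform(0.5, 1.5, size=(n, 4)) # latent variances per sample
outputs = mus[:, 0] + 0.1 * rng.normal(size=n)
q_mu, q_var = mus[0], vars_[0]             # query sample's latent stats

# Select the stored samples most similar to the query (smallest SKL).
d = np.array([skl_diag_gauss(m, v, q_mu, q_var)
              for m, v in zip(mus, vars_)])
idx = np.argsort(d)[:30]

# Local GPR between latent means and outputs of the selected samples.
gpr = GaussianProcessRegressor().fit(mus[idx], outputs[idx])
pred_q = float(gpr.predict(q_mu[None, :])[0])
print(pred_q)
```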