Full text (subscription): 5,895 articles
Free: 721 articles
Free (domestic): 583 articles
Electrical Engineering: 145
Engineering Theory: 1
General Engineering: 186
Chemical Industry: 150
Metalworking: 594
Machinery & Instruments: 1,366
Building Science: 138
Mining Engineering: 62
Energy & Power: 88
Light Industry: 297
Water Resources Engineering: 25
Petroleum & Natural Gas: 30
Weapons Industry: 19
Radio & Electronics: 417
General Industrial Technology: 245
Metallurgical Industry: 47
Nuclear Technology: 8
Automation Technology: 3,381
2024: 8
2023: 153
2022: 234
2021: 279
2020: 220
2019: 163
2018: 151
2017: 170
2016: 196
2015: 260
2014: 332
2013: 366
2012: 437
2011: 527
2010: 334
2009: 362
2008: 340
2007: 402
2006: 396
2005: 300
2004: 255
2003: 222
2002: 199
2001: 142
2000: 124
1999: 110
1998: 96
1997: 89
1996: 69
1995: 64
1994: 41
1993: 40
1992: 27
1991: 17
1990: 15
1989: 15
1988: 9
1987: 3
1986: 3
1985: 4
1984: 4
1983: 4
1982: 4
1981: 4
1979: 1
1978: 1
1977: 2
1976: 3
1974: 1
1973: 1
Sort order: 7,199 results found (search time: 15 ms)
1.
Recently, a number of classification techniques have been introduced. However, processing large datasets in a reasonable time has become a major challenge, making classification more complex and computationally expensive. This motivates solutions that overcome these constraints, such as field-programmable gate arrays (FPGAs). In this paper, we give an overview of the various classification techniques. Then, we present existing FPGA-based implementations of these classification methods. After that, we investigate the challenges encountered and the optimization strategies. Finally, we highlight the hardware accelerator architectures and hardware-design tools suggested to improve the FPGA implementation of classification methods.
2.
Membrane electrode assembly (MEA) is a key component of a proton exchange membrane fuel cell (PEMFC). However, developing a new MEA that meets desired properties, such as operation under low-humidity conditions without a humidifier, is a time- and cost-consuming process. This study applies a machine-learning approach using K-nearest neighbors (KNN) and neural networks (NN) to the MEA development process by identifying a suitable catalyst layer (CL) recipe. Minimum redundancy maximum relevance and principal component analysis were implemented to identify the most important predictors and reduce the data dimension. The number of predictors was found to play an essential role in the accuracy of the KNN and NN models, even though the predictors are mutually correlated. The KNN model with K = 7 minimized the model loss, at 11.9%. The NN model, built with three hidden layers of nine, eight, and nine nodes, achieved the lowest errors of 0.1293 for the Pt catalyst and 0.031 for PVA, an additive blended into the CL of the MEA. However, even though the error is low, the prediction for PVA appears inaccurate regardless of the model structure. Therefore, the KNN model is more appropriate for CL recipe prediction.
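As a rough illustration of the model shapes reported above, the sketch below fits a KNN regressor with K = 7 and an NN with 9-8-9 hidden nodes; the synthetic stand-in data, feature count, and target are assumptions, since the paper's MEA dataset is not reproduced here.

```python
# A minimal sketch of the KNN / NN comparison for catalyst-layer (CL) recipe
# prediction. Only the model shapes (K = 7, hidden layers of 9-8-9 nodes) come
# from the abstract; the data and feature meanings are placeholders.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsRegressor
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
X = rng.uniform(size=(200, 6))                             # hypothetical CL recipe/operating predictors
y = X @ rng.uniform(size=6) + 0.1 * rng.normal(size=200)   # stand-in target, e.g. Pt loading

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# KNN with K = 7, as reported to minimize the model loss in the abstract.
knn = make_pipeline(StandardScaler(), PCA(n_components=4),
                    KNeighborsRegressor(n_neighbors=7))
knn.fit(X_tr, y_tr)

# NN with three hidden layers of nine, eight, and nine nodes.
nn = make_pipeline(StandardScaler(),
                   MLPRegressor(hidden_layer_sizes=(9, 8, 9),
                                max_iter=5000, random_state=0))
nn.fit(X_tr, y_tr)

print("KNN R^2:", knn.score(X_te, y_te))
print("NN  R^2:", nn.score(X_te, y_te))
```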
3.
Accurate knowledge of hydrogen solubility in hydrocarbon fuels and feedstocks is very important in petroleum refineries and coal processing plants. In the present work, four machine learning methods were used to estimate hydrogen solubility in hydrocarbon fuels: extreme gradient boosting (XGBoost), a multi-layer perceptron (MLP) trained with the Levenberg–Marquardt (LM) algorithm, adaptive boosting support vector regression (AdaBoost-SVR), and a memory-efficient gradient boosting tree system on adaptive compact distributions (LiteMORT). To this end, a database containing 445 experimental hydrogen solubility data points in 17 hydrocarbon fuels/feedstocks was collected over wide ranges of operating pressures and temperatures. These fuels include petroleum fractions, refinery products, coal liquids, bitumen, and shale oil. The model inputs are temperature and pressure, along with the fuel's density at 20 °C, molecular weight, and carbon (C) and hydrogen (H) weight percentages. XGBoost showed the highest accuracy, with an overall mean absolute percent relative error of 1.41% and a coefficient of determination (R2) of 0.9998. Seven equations of state (EOSs) were also used to predict hydrogen solubility; among them, the 2- and 3-parameter Soave-Redlich-Kwong EOSs gave the best estimates. Moreover, sensitivity analysis indicated that pressure has the greatest influence on hydrogen solubility, followed by temperature and the fuel's hydrogen weight percent. Finally, the Leverage approach showed that the XGBoost model can be well trusted to estimate hydrogen solubility in hydrocarbon fuels.
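The following sketch shows how such an XGBoost solubility model might be set up; the data are random placeholders, and only the six input columns (temperature, pressure, density at 20 °C, molecular weight, C and H weight percent) follow the abstract.

```python
# A hedged sketch of an XGBoost solubility model. The 445-point experimental
# database is not public here, so random placeholder data stands in.
import numpy as np
from sklearn.metrics import mean_absolute_percentage_error, r2_score
from sklearn.model_selection import train_test_split
from xgboost import XGBRegressor  # assumes the xgboost package is installed

rng = np.random.default_rng(1)
cols = ["T_K", "P_MPa", "density_20C", "mol_weight", "C_wt_pct", "H_wt_pct"]
X = rng.uniform(size=(445, len(cols)))
y = 0.05 * X[:, 1] * X[:, 0] + 0.01 * rng.normal(size=445)   # toy "solubility"

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=1)
model = XGBRegressor(n_estimators=400, max_depth=4, learning_rate=0.05)
model.fit(X_tr, y_tr)

pred = model.predict(X_te)
print("MAPE:", mean_absolute_percentage_error(y_te, pred))
print("R2  :", r2_score(y_te, pred))
# Gain-based importances give a rough sensitivity ranking (the paper found
# pressure dominant, then temperature and hydrogen weight percent).
print(dict(zip(cols, model.feature_importances_.round(3))))
```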
4.
Machine learning algorithms have been widely used in mine fault diagnosis, and correct selection of a suitable algorithm is the key factor affecting diagnosis quality. However, the impact of the choice of machine learning algorithm on the prediction performance of mine fault diagnosis models has not been fully evaluated. In this study, windage alteration fault (WAF) diagnosis models based on the K-nearest neighbor algorithm (KNN), multi-layer perceptron (MLP), support vector machine (SVM), and decision tree (DT) are constructed. The applicability of these four algorithms to WAF diagnosis is explored through a T-type ventilation network simulation experiment and a field application study at the Jinchuan No. 2 mine. The accuracy of fault-location diagnosis for all four models in both networks was 100%. In the simulation experiment, the mean absolute percentage error (MAPE) between the predicted and real fault volumes for the four models was 0.59%, 97.26%, 123.61%, and 8.78%, respectively; in the field application it was 3.94%, 52.40%, 25.25%, and 7.15%, respectively. A comprehensive evaluation of the fault-location and fault-volume diagnosis tests showed that the KNN model is the most suitable algorithm for WAF diagnosis, with the DT model second-best. This study realizes intelligent diagnosis of WAFs and provides technical support for intelligent ventilation.
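A minimal sketch of the four-model MAPE comparison, using simulated data in place of the ventilation-network measurements, might look as follows; the model families and metric come from the abstract, everything else is an assumption.

```python
# Sketch of a four-model comparison for fault-volume prediction (KNN, MLP,
# SVM, DT), scored by MAPE on simulated stand-in data.
import numpy as np
from sklearn.metrics import mean_absolute_percentage_error
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsRegressor
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(2)
X = rng.uniform(size=(300, 5))                     # hypothetical airflow readings
y = X.sum(axis=1) + 0.05 * rng.normal(size=300)    # stand-in fault volume

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=2)
models = {
    "KNN": KNeighborsRegressor(),
    "MLP": MLPRegressor(max_iter=5000, random_state=2),
    "SVM": SVR(),
    "DT": DecisionTreeRegressor(random_state=2),
}
for name, est in models.items():
    pipe = make_pipeline(StandardScaler(), est)
    pipe.fit(X_tr, y_tr)
    mape = mean_absolute_percentage_error(y_te, pipe.predict(X_te))
    print(f"{name}: MAPE = {100 * mape:.2f}%")
```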
5.
Electrical energy is one of the key components of the development and sustainability of any nation. India is a developing country blessed with abundant renewable energy resources, yet there are various remote areas where grid supply is rarely available. Because electricity is a basic requirement, exploiting the available renewable energy resources, integrated with storage devices such as fuel cells and batteries, must be prioritized for power generation, helping planners provide energy-efficient alternative solutions. Such a solution will not only meet electricity demand but also help reduce greenhouse gas emissions, yielding an efficient, sustainable, and eco-friendly system that contributes to the smart-grid environment. In this paper, a modified grey wolf optimizer is used to develop a hybrid microgrid based on available renewable energy resources, considering modern power grid interactions. The proposed approach provides a robust and efficient microgrid that utilizes solar photovoltaic technology and a wind energy conversion system, integrating renewable resources with a meta-heuristic optimization algorithm for optimal energy dispatch in a grid-connected hybrid microgrid. The approach mainly aims to provide optimal sizing of renewable-energy-based microgrids from the load profile according to time of use. To validate the approach, a comparative case study shows significant rolling-cost savings of 30.88% and 49.99% compared with fuzzy-logic and mixed-integer-linear-programming-based energy management systems, respectively.
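The abstract does not give the optimizer's modifications, but a plain grey wolf optimizer applied to a toy PV/wind sizing cost, as sketched below, illustrates the meta-heuristic sizing idea; the cost model, yields, and bounds are assumptions, not the paper's rolling-cost model.

```python
# A minimal grey wolf optimizer (GWO) sketch for sizing a PV + wind microgrid.
# The update rules are the standard GWO equations; the cost function is a toy.
import numpy as np

def gwo(cost, bounds, n_wolves=20, n_iter=100, seed=3):
    rng = np.random.default_rng(seed)
    lo, hi = bounds[:, 0], bounds[:, 1]
    wolves = rng.uniform(lo, hi, size=(n_wolves, len(lo)))
    for t in range(n_iter):
        fitness = np.apply_along_axis(cost, 1, wolves)
        alpha, beta, delta = wolves[np.argsort(fitness)[:3]]
        a = 2 - 2 * t / n_iter                      # linearly decreasing coefficient
        for i in range(n_wolves):
            new = np.zeros_like(wolves[i])
            for leader in (alpha, beta, delta):
                r1, r2 = rng.random(len(lo)), rng.random(len(lo))
                A, C = 2 * a * r1 - a, 2 * r2
                new += leader - A * np.abs(C * leader - wolves[i])
            wolves[i] = np.clip(new / 3, lo, hi)    # average of the three leaders
    return wolves[np.argmin(np.apply_along_axis(cost, 1, wolves))]

# Toy daily cost: capital charge for PV/wind capacity plus a penalty for unmet load.
def daily_cost(x, load=500.0):
    pv_kw, wind_kw = x
    energy = 4.5 * pv_kw + 6.0 * wind_kw            # assumed kWh/kW daily yields
    return 90 * pv_kw + 120 * wind_kw + 3.0 * max(load - energy, 0.0)

best = gwo(daily_cost, np.array([[0, 200], [0, 200]]))
print("PV kW = %.1f, wind kW = %.1f, cost = %.0f" % (*best, daily_cost(best)))
```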
6.
The evaluation of the volumetric accuracy of a machine tool is an open challenge in industry, and a wide variety of technical solutions are available on the market and at the research level. All solutions have advantages and disadvantages concerning which errors can be measured, the achievable uncertainty, the ease of implementation, the possibility of machine integration and automation, the equipment cost, and the machine occupation time, so it is not always straightforward which option to choose for each application. The need to ensure accuracy during the whole lifetime of the machine, and the availability of monitoring systems developed following the Industry 4.0 trend, are pushing the development of measurement systems that can be integrated into the machine to perform semi-automatic verification procedures, which the machine user can run frequently to monitor the machine's condition. Calibrated-artefact-based calibration and verification solutions have an advantage over laser-based solutions in terms of cost and feasibility of machine integration, but they must be optimized for each machine and customer's requirements to achieve the required calibration uncertainty and minimize machine occupation time. This paper introduces a digital-twin-based methodology to simulate all relevant effects in an artefact-based machine tool calibration procedure: the machine itself with its expected error ranges, the artefact geometry and uncertainty, the artefact positions in the workspace, the probe uncertainty, the compensation model, and so on. By parameterizing all relevant variables in the design of the calibration procedure, this simulation methodology can be used to analyse the effect of each design variable on the error-mapping uncertainty, which is of great help in adapting the procedure to each specific machine and user's requirements. The simulation methodology and the analysis possibilities are illustrated by applying them to a 3-axis milling machine tool.
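As a heavily simplified illustration of the digital-twin idea, the sketch below simulates a virtual machine with assumed linear-scale errors per axis, "probes" a calibrated artefact under probe and artefact noise, and repeats the identification in a Monte Carlo loop to estimate the error-mapping uncertainty; the one-parameter-per-axis error model and all magnitudes are assumptions, far simpler than the paper's parameterization.

```python
# Stripped-down digital twin of artefact-based error mapping: identify assumed
# per-axis scale errors by least squares under probe/artefact noise, and use
# Monte Carlo repetition to quantify the identification uncertainty.
import numpy as np

rng = np.random.default_rng(4)
true_scale = np.array([30e-6, -20e-6, 10e-6])        # assumed scale error per axis (m/m)
targets = rng.uniform(0, 0.5, size=(25, 3))          # artefact sphere positions (m)

def simulate_probing(scale_err, probe_sigma=1e-6, artefact_sigma=0.5e-6):
    # True position deviates from the commanded one by the scale error;
    # probe and artefact calibration noise are added on top.
    deviation = targets * scale_err
    noise = rng.normal(0, probe_sigma, targets.shape) \
          + rng.normal(0, artefact_sigma, targets.shape)
    return deviation + noise                          # measured deviations

estimates = []
for _ in range(500):                                  # Monte Carlo runs
    d = simulate_probing(true_scale)
    # Per-axis least squares: slope of deviation vs. commanded position.
    est = [np.linalg.lstsq(targets[:, [k]], d[:, k], rcond=None)[0][0]
           for k in range(3)]
    estimates.append(est)
estimates = np.array(estimates)
print("identified scale errors (um/m):", estimates.mean(0) * 1e6)
print("mapping uncertainty, 1-sigma (um/m):", estimates.std(0) * 1e6)
```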
7.
Engineering new glass compositions has experienced a steady shift from (educated) trial-and-error to data- and simulation-driven strategies. In this work, we developed a computer program that combines data-driven predictive models (in this case, neural networks) with a genetic algorithm to design glass compositions with desired combinations of properties. First, we induced predictive models for the glass transition temperature (Tg) using a dataset of 45,302 compositions with 39 different chemical elements, and for the refractive index (nd) using a dataset of 41,225 compositions with 38 different chemical elements. Then, we searched for relevant glass compositions using a genetic algorithm informed by a design goal of glasses having high nd (1.7 or more) and low Tg (500 °C or less). Two candidate compositions suggested by the combined algorithms were selected and produced in the laboratory. These compositions are significantly different from those in the datasets used to induce the predictive models, showing that the method is indeed capable of exploration. Both glasses met the constraints of the work, which supports the proposed framework. Therefore, this new tool can be used immediately to accelerate the design of new glasses. These results are a stepping stone on the pathway to machine-learning-guided design of novel glasses.
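A toy version of the NN-plus-genetic-algorithm loop is sketched below; simple analytic functions stand in for the trained Tg and nd networks (an assumption), and a small GA evolves 4-component compositions toward the paper's targets of nd >= 1.7 and Tg <= 500 °C.

```python
# Toy sketch of property-targeted composition search: a GA minimizes penalties
# for violating nd >= 1.7 and Tg <= 500 C, with analytic surrogate predictors.
import numpy as np

rng = np.random.default_rng(5)
N_OXIDES = 4   # hypothetical component count; the paper used ~39 elements

# Stand-ins for the trained predictive models (assumed forms).
def predict_tg(x):  return 400 + 300 * x[0] - 150 * x[2]          # deg C
def predict_nd(x):  return 1.5 + 0.4 * x[1] + 0.2 * x[3]

def fitness(x):
    # Penalize violation of the design targets; zero means both targets met.
    return max(1.7 - predict_nd(x), 0) + max(predict_tg(x) - 500, 0) / 100

pop = rng.dirichlet(np.ones(N_OXIDES), size=40)   # mole fractions sum to 1
for gen in range(100):
    scores = np.array([fitness(x) for x in pop])
    parents = pop[np.argsort(scores)[:20]]        # truncation selection
    children = []
    for _ in range(20):
        a, b = parents[rng.integers(20, size=2)]
        child = np.clip((a + b) / 2 + rng.normal(0, 0.02, N_OXIDES), 0, None)
        children.append(child / child.sum())      # re-normalize the composition
    pop = np.vstack([parents, children])

best = pop[np.argmin([fitness(x) for x in pop])]
print("composition:", best.round(3), "Tg:", round(predict_tg(best)),
      "nd:", round(predict_nd(best), 3))
```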
8.
9.
Traditional Multiple Empirical Kernel Learning (MEKL) expands the representation of samples and achieves better classification by using different empirical kernels to map the original data space into multiple kernel spaces. To make MEKL suitable for imbalanced problems, this paper introduces a weight matrix and a regularization term into MEKL. The weight matrix assigns a high misclassification cost to minority samples to balance the misclassification cost between the minority and majority classes. The regularization term, named Majority Projection (MP), makes the classification hyperplane fit the distribution shape of the majority samples and enlarges the between-class distance between the minority and majority classes. The contributions of this work are: (i) assigning a high cost to minority samples to deal with imbalanced problems, (ii) introducing a new regularization term that accounts for the data distribution, and (iii) modifying the original PAC-Bayes bound to test the error upper bound of MEKL-MP. Analysis of the experimental results shows that the proposed MEKL-MP is well suited to imbalanced problems and has lower generalization risk according to the value of the PAC-Bayes bound.
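MEKL-MP itself is not available in standard libraries, so the sketch below shows only the cost-weighting idea the abstract describes, using a class-weighted kernel SVM as a stand-in for the weight matrix on minority samples.

```python
# Cost-sensitive stand-in for the abstract's weight-matrix idea: give the
# minority class a higher misclassification cost via class_weight on a kernel SVM.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.metrics import balanced_accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

X, y = make_classification(n_samples=600, weights=[0.9, 0.1], random_state=6)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=6)

for weights in (None, "balanced"):   # unweighted vs. cost-weighted minority class
    clf = SVC(kernel="rbf", class_weight=weights).fit(X_tr, y_tr)
    score = balanced_accuracy_score(y_te, clf.predict(X_te))
    print(f"class_weight={weights}: balanced accuracy = {score:.3f}")
```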
10.
To improve the accuracy of pollen concentration forecasts and address the low accuracy of existing methods, a pollen concentration forecasting model based on particle swarm optimization (PSO) and support vector machines (SVM) is proposed. First, considering multiple meteorological factors such as temperature, diurnal temperature range, relative humidity, precipitation, wind, and sunshine hours, the meteorological variables most strongly correlated with pollen concentration are selected to form the feature vector. Next, an SVM prediction model is built from the feature vectors and pollen concentration data, and the PSO algorithm is used to find the optimal parameters, which are then used to optimize the prediction model. Finally, the optimized model is used to predict pollen concentration over the next 24 hours and is compared with the unoptimized SVM, multiple linear regression (MLR), and a back-propagation neural network (BPNN). In addition, the optimized model was applied to daily pollen concentration prediction at two stations of a city: the southern-suburb observatory and Miyun. Experimental results show that, compared with the other forecasting methods, the proposed method effectively improves the accuracy of the 24-hour pollen concentration forecast and has good generalization ability.
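A minimal sketch of the PSO-plus-SVM idea is given below: a small particle swarm searches the SVR hyperparameters (C, gamma) by cross-validation. The meteorological data are synthetic placeholders; only the feature choice follows the abstract.

```python
# PSO search over SVR hyperparameters (C, gamma) in log10 space, scored by
# 3-fold cross-validated R^2 on synthetic stand-in meteorological data.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVR

rng = np.random.default_rng(7)
# Columns: temperature, diurnal range, humidity, precipitation, wind, sunshine hours.
X = rng.uniform(size=(200, 6))
y = 3 * X[:, 0] - 2 * X[:, 2] + 0.1 * rng.normal(size=200)  # toy pollen level

def score(p):
    C, gamma = 10 ** p          # particles live in log10 space
    return -cross_val_score(SVR(C=C, gamma=gamma), X, y, cv=3).mean()

lo, hi = np.array([-1, -3]), np.array([3, 1])
pos = rng.uniform(lo, hi, (15, 2)); vel = np.zeros((15, 2))
pbest, pcost = pos.copy(), np.array([score(p) for p in pos])
gbest = pbest[pcost.argmin()].copy()
for _ in range(20):
    r1, r2 = rng.random((15, 2)), rng.random((15, 2))
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, lo, hi)
    cost = np.array([score(p) for p in pos])
    better = cost < pcost
    pbest[better], pcost[better] = pos[better], cost[better]
    gbest = pbest[pcost.argmin()].copy()
print("best C = %.2f, gamma = %.4f" % tuple(10 ** gbest))
```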