1.
Recently, a number of classification techniques have been introduced. However, processing large datasets in a reasonable time has become a major challenge, making classification tasks more complex and computationally expensive. This creates a need for solutions that overcome these constraints, such as field-programmable gate arrays (FPGAs). In this paper, we give an overview of the various classification techniques. Then, we present the existing FPGA-based implementations of these classification methods. After that, we examine the challenges encountered and the optimization strategies employed. Finally, we highlight the hardware accelerator architectures and hardware design tools suggested to improve the FPGA implementation of classification methods.
2.
Membrane electrode assembly (MEA) is considered a key component of a proton exchange membrane fuel cell (PEMFC). However, developing a new MEA that meets desired properties, such as operation under low-humidity conditions without a humidifier, is a time-consuming and costly process. This study applies a machine-learning-based approach using K-nearest neighbors (KNN) and neural networks (NN) to the MEA development process by identifying a suitable catalyst layer (CL) recipe. Minimum redundancy maximum relevance and principal component analysis were implemented to identify the most important predictors and reduce the data dimensionality. The number of predictors was found to play an essential role in the accuracy of the KNN and NN models, even though the predictors are mutually correlated. The KNN model with K = 7 minimized the model loss, at 11.9%. The NN model, built with three hidden layers of nine, eight, and nine nodes, achieved the lowest errors of 0.1293 for the Pt catalyst and 0.031 for PVA as an additive blended into the CL of the MEA. However, even with this low error, the PVA predictions appear inaccurate regardless of the model structure. Therefore, the KNN model is more appropriate for CL recipe prediction.
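As a rough illustration of the model-selection step in this abstract, the sketch below sweeps K for a KNN regressor after a PCA step. It is a minimal sketch under stated assumptions: scikit-learn is available, and the data, feature count, and target are synthetic placeholders for the real catalyst-layer recipe dataset.

```python
# Minimal sketch of a PCA + KNN model-selection loop.
# The data and target below are synthetic stand-ins, not the paper's dataset.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 8))                        # stand-in CL recipe predictors
y = 0.5 * X[:, 0] + rng.normal(scale=0.1, size=200)  # stand-in target (e.g. Pt loading)

for k in range(1, 12, 2):                            # sweep K, including K = 7
    model = make_pipeline(StandardScaler(),
                          PCA(n_components=5),
                          KNeighborsRegressor(n_neighbors=k))
    mae = -cross_val_score(model, X, y, cv=5,
                           scoring="neg_mean_absolute_error").mean()
    print(f"K = {k:2d}: cross-validated MAE = {mae:.4f}")
```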
3.
Having accurate information about hydrogen solubility in hydrocarbon fuels and feedstocks is very important in petroleum refineries and coal processing plants. In the present work, four machine learning methods were used to estimate hydrogen solubility in hydrocarbon fuels: extreme gradient boosting (XGBoost), a multi-layer perceptron (MLP) trained with the Levenberg–Marquardt (LM) algorithm, adaptive boosting support vector regression (AdaBoost-SVR), and a memory-efficient gradient boosting tree system on adaptive compact distributions (LiteMORT). To this end, a database containing 445 experimental data points on hydrogen solubility in 17 different hydrocarbon fuels/feedstocks was collected over wide ranges of operating pressures and temperatures. These hydrocarbon fuels include petroleum fractions, refinery products, coal liquids, bitumen, and shale oil. The input parameters of the models are temperature and pressure, along with the density at 20 °C, molecular weight, and weight percentages of carbon (C) and hydrogen (H) of the hydrocarbon fuels. XGBoost showed the highest accuracy of the models, with an overall mean absolute percent relative error of 1.41% and a coefficient of determination (R2) of 0.9998. Seven equations of state (EOSs) were also used to predict hydrogen solubility; among them, the 2- and 3-parameter Soave-Redlich-Kwong EOSs gave the best estimates. Moreover, sensitivity analysis indicated that pressure has the strongest influence on hydrogen solubility, followed by temperature and the hydrogen weight percent of the fuel. Finally, Leverage approach results showed that the XGBoost model can be well trusted to estimate hydrogen solubility in hydrocarbon fuels.
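The sketch below shows the general shape of such an XGBoost regression with the two headline metrics (percent relative error and R2). It assumes the xgboost and scikit-learn packages; the 445 "samples" are synthetic stand-ins for the experimental solubility database, not real measurements.

```python
# Sketch of a gradient-boosting regression with MAPE and R2 evaluation.
# All data here are synthetic placeholders.
import numpy as np
import xgboost as xgb
from sklearn.metrics import r2_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
# Columns: temperature, pressure, density at 20 C, molecular weight, wt% C, wt% H
X = rng.uniform(size=(445, 6))
y = 1.0 + 2.0 * X[:, 1] + 0.5 * X[:, 0] + rng.normal(scale=0.05, size=445)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
model = xgb.XGBRegressor(n_estimators=500, max_depth=4, learning_rate=0.05)
model.fit(X_tr, y_tr)

pred = model.predict(X_te)
mape = np.mean(np.abs((y_te - pred) / y_te)) * 100   # mean absolute percent relative error
print(f"MAPE = {mape:.2f}%  R2 = {r2_score(y_te, pred):.4f}")
```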
4.
Machine learning algorithms have been widely used in mine fault diagnosis, and selecting a suitable algorithm is a key factor in diagnostic performance. However, the impact of the choice of machine learning algorithm on the prediction performance of mine fault diagnosis models has not been fully evaluated. In this study, windage alteration fault (WAF) diagnosis models based on the K-nearest neighbor algorithm (KNN), multi-layer perceptron (MLP), support vector machine (SVM), and decision tree (DT) are constructed. The applicability of these four algorithms to WAF diagnosis is then explored through a T-type ventilation network simulation experiment and a field application study at the Jinchuan No. 2 mine. The accuracy of fault location diagnosis for the four models in both networks was 100%. In the simulation experiment, the mean absolute percentage error (MAPE) between the predicted and real fault volumes for the four models was 0.59%, 97.26%, 123.61%, and 8.78%, respectively; in the field application it was 3.94%, 52.40%, 25.25%, and 7.15%, respectively. The comprehensive evaluation of the fault location and fault volume diagnosis tests showed that the KNN model is the most suitable algorithm for WAF diagnosis, with the DT model second-best. This study realizes intelligent diagnosis of WAFs and provides technical support for the realization of intelligent ventilation.
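A comparison of this kind can be sketched as below: the same four scikit-learn model families are ranked by MAPE on a synthetic regression task that stands in for fault-volume prediction (the features, target, and hyperparameters are illustrative assumptions, not the paper's setup).

```python
# Illustrative MAPE comparison of KNN, MLP, SVM, and DT regressors
# on synthetic stand-in data for fault-volume prediction.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsRegressor
from sklearn.neural_network import MLPRegressor
from sklearn.svm import SVR
from sklearn.tree import DecisionTreeRegressor

def mape(y_true, y_pred):
    """Mean absolute percentage error, the metric used to rank the models."""
    return np.mean(np.abs((y_true - y_pred) / y_true)) * 100

rng = np.random.default_rng(1)
X = rng.uniform(size=(300, 5))                       # stand-in ventilation features
y = 10 + 3 * X[:, 0] - 2 * X[:, 2] + rng.normal(scale=0.2, size=300)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

models = {
    "KNN": KNeighborsRegressor(n_neighbors=5),
    "MLP": MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=2000, random_state=0),
    "SVM": SVR(),
    "DT":  DecisionTreeRegressor(random_state=0),
}
for name, m in models.items():
    m.fit(X_tr, y_tr)
    print(f"{name}: MAPE = {mape(y_te, m.predict(X_te)):.2f}%")
```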
5.
Electrical energy is one of the key components for the development and sustainability of any nation. India is a developing country blessed with abundant renewable energy resources, yet many remote areas still lack reliable grid supply. Since electrical energy is a basic requirement, priority must be given to exploiting the available renewable resources, integrated with storage devices such as fuel cells and batteries, for power generation, helping planners provide energy-efficient alternative solutions. Such a solution will not only meet electricity demand but also reduce greenhouse gas emissions, yielding an efficient, sustainable, and eco-friendly system that contributes substantially to the smart-grid environment. In this paper, a modified grey wolf optimizer is used to develop a hybrid microgrid based on the available renewable energy resources, considering modern power grid interactions. The proposed approach yields a robust and efficient microgrid that utilizes solar photovoltaic technology and a wind energy conversion system, integrating these renewable resources with a meta-heuristic optimization algorithm for optimal dispatch of energy in a grid-connected hybrid microgrid. The approach is mainly aimed at providing optimal sizing of renewable-energy-based microgrids according to the load profile and time of use. To validate the approach, a comparative case study is conducted, showing significant rolling-cost savings of 30.88% and 49.99% compared with fuzzy-logic and mixed-integer linear programming based energy management systems, respectively.
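For readers unfamiliar with grey wolf optimization, the sketch below implements the standard GWO update (not the paper's modified variant): the three best "wolves" (alpha, beta, delta) guide the rest of the pack toward the optimum while an exploration factor decays from 2 to 0. The cost function is a toy stand-in for the microgrid sizing/dispatch objective.

```python
# Minimal standard grey wolf optimizer (GWO) sketch.
# The cost function is a placeholder, not the paper's dispatch model.
import numpy as np

def gwo(cost, dim, bounds, n_wolves=20, iters=100, seed=0):
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    X = rng.uniform(lo, hi, size=(n_wolves, dim))
    for t in range(iters):
        f = np.apply_along_axis(cost, 1, X)
        alpha, beta, delta = X[np.argsort(f)[:3]]    # three best wolves lead
        a = 2 - 2 * t / iters                        # exploration factor: 2 -> 0
        new_X = []
        for x in X:
            cand = []
            for leader in (alpha, beta, delta):
                r1, r2 = rng.random(dim), rng.random(dim)
                A, C = 2 * a * r1 - a, 2 * r2
                cand.append(leader - A * np.abs(C * leader - x))
            new_X.append(np.clip(np.mean(cand, axis=0), lo, hi))
        X = np.array(new_X)
    f = np.apply_along_axis(cost, 1, X)
    return X[np.argmin(f)], f.min()

# Toy stand-in cost over 3 sizing variables (e.g. PV, wind, storage capacity):
best, val = gwo(lambda x: np.sum((x - 3) ** 2), dim=3, bounds=(0, 10))
print(best, val)
```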
6.
The evaluation of the volumetric accuracy of a machine tool is an open challenge in industry, and a wide variety of technical solutions are available on the market and at the research level. Each solution has advantages and disadvantages concerning which errors can be measured, the achievable uncertainty, ease of implementation, possibility of machine integration and automation, equipment cost, and machine occupation time, so it is not always straightforward which option to choose for a given application. The need to ensure accuracy over the whole lifetime of the machine, together with the availability of monitoring systems developed following the Industry 4.0 trend, is driving the development of measurement systems that can be integrated into the machine to perform semi-automatic verification procedures, which the machine user can run frequently to monitor the machine's condition. Calibrated-artefact-based calibration and verification solutions have an advantage here over laser-based solutions in terms of cost and feasibility of machine integration, but they must be optimized for each machine and each customer's requirements to achieve the required calibration uncertainty while minimizing machine occupation time.

This paper introduces a digital-twin-based methodology to simulate all relevant effects in an artefact-based machine tool calibration procedure: the machine itself with its expected error ranges, the artefact geometry and uncertainty, the artefact positions in the workspace, the probe uncertainty, the compensation model, and so on. By parameterizing all relevant variables in the design of the calibration procedure, this simulation methodology can be used to analyse the effect of each design variable on the error-mapping uncertainty, which is of great help in adapting the procedure to each specific machine and user requirement. The simulation methodology and the analysis possibilities are illustrated by applying them to a 3-axis milling machine tool.
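The core of such a simulation can be hinted at with a Monte Carlo sketch: inject a known machine error, perturb each measurement with assumed artefact and probe uncertainties, fit the compensation model, and observe the spread of the identified error parameter. Everything below (a single linear scale error, the noise magnitudes, the artefact layout) is a simplifying assumption for illustration only, far short of the paper's full parameterized digital twin.

```python
# Monte Carlo sketch of artefact-based error identification uncertainty.
# Error model, noise levels, and geometry are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
true_scale_err = 50e-6                     # injected X-axis scale error (50 um/m)
positions = np.linspace(0.1, 1.0, 10)      # artefact feature positions along X (m)

estimates = []
for _ in range(1000):                      # Monte Carlo runs
    artefact_u = rng.normal(0, 1e-6, positions.shape)   # artefact calibration noise
    probe_u = rng.normal(0, 0.5e-6, positions.shape)    # probing noise
    measured = positions * (1 + true_scale_err) + artefact_u + probe_u
    # Compensation model: least-squares fit of a linear scale error
    slope = np.polyfit(positions, measured - positions, 1)[0]
    estimates.append(slope)

estimates = np.array(estimates)
print(f"identified scale error: mean = {estimates.mean():.3e}, "
      f"2-sigma uncertainty = {2 * estimates.std():.3e}")
```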
7.
The engineering of new glass compositions has shown a steady shift from (educated) trial-and-error toward data- and simulation-driven strategies. In this work, we developed a computer program that combines data-driven predictive models (in this case, neural networks) with a genetic algorithm to design glass compositions with desired combinations of properties. First, we induced predictive models for the glass transition temperature (Tg) using a dataset of 45,302 compositions with 39 different chemical elements, and for the refractive index (nd) using a dataset of 41,225 compositions with 38 different chemical elements. Then, we searched for relevant glass compositions using a genetic algorithm informed by a design target of glasses having high nd (1.7 or more) and low Tg (500 °C or less). Two candidate compositions suggested by the combined algorithms were selected and produced in the laboratory. These compositions differ significantly from those in the datasets used to induce the predictive models, showing that the method is indeed capable of exploration. Both glasses met the constraints of the work, which supports the proposed framework. This new tool can therefore be used immediately to accelerate the design of new glasses. These results are a stepping stone on the pathway toward machine-learning-guided design of novel glasses.
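The coupling of a property predictor with a genetic search can be sketched as follows. The two `predict_*` functions are toy stand-ins for the trained neural networks, the oxide count is hypothetical, and the penalty weights are arbitrary; only the overall loop structure (fitness from surrogate models, selection, crossover, mutation, renormalization of compositions) reflects the described approach.

```python
# Sketch of a genetic algorithm driven by surrogate property models.
# predict_tg / predict_nd are placeholders for the trained networks.
import numpy as np

rng = np.random.default_rng(0)
N_COMPONENTS = 10                     # hypothetical number of composition variables

def predict_tg(x):                    # toy stand-in for the Tg network
    return 400 + 300 * x[0] - 150 * x[1]

def predict_nd(x):                    # toy stand-in for the nd network
    return 1.5 + 0.4 * x[1] + 0.1 * x[2]

def fitness(x):
    # Reward high nd; softly penalize Tg above the 500 C design target.
    return predict_nd(x) - 0.01 * max(0.0, predict_tg(x) - 500)

pop = rng.dirichlet(np.ones(N_COMPONENTS), size=50)   # compositions sum to 1
for gen in range(100):
    scores = np.array([fitness(x) for x in pop])
    parents = pop[np.argsort(scores)[-10:]]           # selection: keep top 10
    children = []
    for _ in range(len(pop)):
        a, b = parents[rng.integers(10, size=2)]
        child = np.where(rng.random(N_COMPONENTS) < 0.5, a, b)  # uniform crossover
        child = np.clip(child + rng.normal(0, 0.01, N_COMPONENTS), 0, None)  # mutate
        children.append(child / child.sum())          # renormalize to a composition
    pop = np.array(children)

best = max(pop, key=fitness)
print("candidate composition:", np.round(best, 3))
```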
8.
As sentinels of climate change and other anthropogenic forces, freshwater lakes are experiencing ecosystem disruptions at every level of the food web, beginning with the phytoplankton, a highly responsive group of organisms. Most studies of the effects of climate change on phytoplankton focus on a scenario in which temperatures continuously increase and droughts intersperse heavy precipitation events. Like much of the conterminous United States in 2019, the Muskegon River watershed (Michigan, USA) experienced record-breaking rainfall accompanied by unusually cool temperatures, affording an opportunity to explore how an alternative potential climate scenario may affect phytoplankton. We conducted biweekly sampling of environmental variables and phytoplankton in Muskegon Lake, a Great Lakes Area of Concern that connects to Lake Michigan. We compared environmental variables in 2019 to those of the previous eight years using long-term data from the Muskegon Lake Observatory buoy, and annual monitoring excursions provided historical phytoplankton data. Under cold and wet conditions, diatoms were the single dominant division throughout the entire growth season, an unprecedented scenario in Muskegon Lake. On 10 of the 13 biweekly sampling days in 2019, diatoms comprised over 75% of the phytoplankton community in the lake by count, indicating that the spring diatom bloom persisted through the fall. Additionally, the phytoplankton seasonal succession and abundance patterns typically seen in this lake were absent. In a world experiencing reduced predictability, increased variability, and regional climate anomalies, studying periods of extreme weather may offer insight into how natural systems will be affected and respond under future climate scenarios.
9.
10.
Traditional Multiple Empirical Kernel Learning (MEKL) enriches the representation of samples and improves classification by using different empirical kernels to map the original data space into multiple kernel spaces. To adapt MEKL to imbalanced problems, this paper introduces a weight matrix and a regularization term into MEKL. The weight matrix assigns a high misclassification cost to minority samples to balance the misclassification cost between the minority and majority classes. The regularization term, named Majority Projection (MP), makes the classification hyperplane fit the distribution shape of the majority samples and enlarges the between-class distance between the minority and majority classes. The contributions of this work are: (i) assigning high cost to minority samples to deal with imbalanced problems, (ii) introducing a new regularization term that accounts for the data distribution, and (iii) modifying the original PAC-Bayes bound to test the error upper bound of MEKL-MP. Analysis of the experimental results shows that the proposed MEKL-MP is well suited to imbalanced problems and has lower generalization risk, in accordance with the value of the PAC-Bayes bound.
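The cost-weighting idea alone (not the authors' MEKL-MP method) can be demonstrated with a standard kernel classifier: give minority-class samples a higher misclassification cost and compare balanced accuracy. The sketch below uses scikit-learn's class weights on synthetic imbalanced data as a stand-in for the paper's weight matrix.

```python
# Illustration of cost-sensitive weighting for imbalanced classification
# (a simple kernel SVM stand-in, not the paper's MEKL-MP).
import numpy as np
from sklearn.metrics import balanced_accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

rng = np.random.default_rng(0)
# Imbalanced toy data: 500 majority vs. 50 minority samples
X = np.vstack([rng.normal(0.0, 1.0, size=(500, 2)),
               rng.normal(2.0, 1.0, size=(50, 2))])
y = np.array([0] * 500 + [1] * 50)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

for weights in (None, {0: 1, 1: 10}):     # unweighted vs. 10x minority cost
    clf = SVC(kernel="rbf", class_weight=weights).fit(X_tr, y_tr)
    acc = balanced_accuracy_score(y_te, clf.predict(X_te))
    print(f"class_weight={weights}: balanced accuracy = {acc:.3f}")
```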