Paid full text: 8,683 articles
Free: 832 articles
Domestic free: 718 articles
Electrical engineering: 196 articles
Technical theory: 1 article
General: 358 articles
Chemical industry: 210 articles
Metalworking: 651 articles
Machinery & instrumentation: 1,436 articles
Building science: 226 articles
Mining engineering: 70 articles
Energy & power: 143 articles
Light industry: 310 articles
Hydraulic engineering: 30 articles
Petroleum & natural gas: 34 articles
Weapons industry: 19 articles
Radio & electronics: 586 articles
General industrial technology: 311 articles
Metallurgy: 91 articles
Nuclear technology: 14 articles
Automation technology: 5,547 articles
2024: 16 articles
2023: 191 articles
2022: 321 articles
2021: 409 articles
2020: 306 articles
2019: 226 articles
2018: 215 articles
2017: 232 articles
2016: 290 articles
2015: 353 articles
2014: 498 articles
2013: 543 articles
2012: 617 articles
2011: 761 articles
2010: 492 articles
2009: 544 articles
2008: 509 articles
2007: 565 articles
2006: 507 articles
2005: 391 articles
2004: 318 articles
2003: 305 articles
2002: 287 articles
2001: 201 articles
2000: 174 articles
1999: 156 articles
1998: 153 articles
1997: 128 articles
1996: 104 articles
1995: 94 articles
1994: 53 articles
1993: 60 articles
1992: 44 articles
1991: 27 articles
1990: 28 articles
1989: 23 articles
1988: 14 articles
1987: 11 articles
1986: 7 articles
1985: 6 articles
1984: 6 articles
1983: 8 articles
1982: 9 articles
1981: 6 articles
1980: 4 articles
1979: 2 articles
1978: 5 articles
1977: 4 articles
1976: 5 articles
1973: 2 articles
Sort order: a total of 10,000 query results; search took 15 ms.
1.
Recently, a number of classification techniques have been introduced. However, processing large datasets in a reasonable time has become a major challenge, making classification more complex and computationally expensive. This motivates solutions that overcome these constraints, such as field-programmable gate arrays (FPGAs). In this paper, we give an overview of the various classification techniques. Then, we present existing FPGA-based implementations of these classification methods. After that, we investigate the challenges encountered and the optimization strategies. Finally, we highlight the hardware accelerator architectures and hardware design tools suggested to improve the FPGA implementation of classification methods.
2.
Membrane electrode assembly (MEA) is considered a key component of a proton exchange membrane fuel cell (PEMFC). However, developing a new MEA that meets desired properties, such as operation under low-humidity conditions without a humidifier, is a time-consuming and costly process. This study applies a machine-learning-based approach using K-nearest neighbors (KNN) and neural networks (NN) to the MEA development process by identifying a suitable catalyst layer (CL) recipe for the MEA. Minimum redundancy maximum relevance and principal component analysis were implemented to identify the most important predictors and reduce the data dimensionality. The number of predictors was found to play an essential role in the accuracy of the KNN and NN models, even though the predictors are mutually correlated. The KNN model with K = 7 minimized the model loss, at 11.9%. The NN model, constructed with three hidden layers of nine, eight, and nine nodes, achieved the lowest error: 0.1293 for the Pt catalyst and 0.031 for PVA as an additive blended into the CL of the MEA. However, even though the error is low, the prediction for PVA appears inaccurate regardless of the model structure. Therefore, the KNN model is more appropriate for CL recipe prediction.
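As a rough illustration of the kind of pipeline this abstract describes — not the authors' actual code — the following sketch combines PCA-based dimensionality reduction with a K = 7 KNN model, assuming scikit-learn and using random placeholder data in place of the paper's CL recipe dataset:

```python
# Minimal sketch: PCA for dimensionality reduction followed by a
# K-nearest-neighbor model with K = 7, as in the abstract.
# Features and targets below are hypothetical placeholders.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.neighbors import KNeighborsRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.random((200, 10))   # stand-in for CL recipe predictors
y = rng.random(200)         # stand-in for a target property of the MEA

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = make_pipeline(StandardScaler(), PCA(n_components=5),
                      KNeighborsRegressor(n_neighbors=7))
model.fit(X_tr, y_tr)
print("held-out R^2:", model.score(X_te, y_te))
```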
3.
Having accurate information about hydrogen solubility in hydrocarbon fuels and feedstocks is very important in petroleum refineries and coal processing plants. In the present work, extreme gradient boosting (XGBoost), a multi-layer perceptron (MLP) trained with the Levenberg–Marquardt (LM) algorithm, adaptive boosting support vector regression (AdaBoost–SVR), and a memory-efficient gradient boosting tree system on adaptive compact distributions (LiteMORT) were used as four novel machine learning methods for estimating the hydrogen solubility in hydrocarbon fuels. To achieve this goal, a database containing 445 experimental data points on hydrogen solubility in 17 different hydrocarbon fuels/feedstocks was collected over wide ranges of operating pressures and temperatures. These hydrocarbon fuels include petroleum fractions, refinery products, coal liquids, bitumen, and shale oil. The input parameters of the models are temperature and pressure, along with the density at 20 °C, the molecular weight, and the weight percentages of carbon (C) and hydrogen (H) of the hydrocarbon fuels. XGBoost showed the highest accuracy compared to the other models, with an overall mean absolute percent relative error of 1.41% and a coefficient of determination (R2) of 0.9998. Also, seven equations of state (EOSs) were used to predict hydrogen solubility in hydrocarbon fuels; the 2- and 3-parameter Soave–Redlich–Kwong EOSs rendered the best estimates among them. Moreover, sensitivity analysis indicated that pressure has the greatest influence on hydrogen solubility in hydrocarbon fuels, followed by temperature and the hydrogen weight percent of the fuels. Finally, the Leverage approach showed that the XGBoost model can be well trusted to estimate the hydrogen solubility in hydrocarbon fuels.
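A hedged sketch of the XGBoost workflow described above, assuming the xgboost and scikit-learn packages; the six input columns mirror the abstract's predictors, but the data are random placeholders rather than the paper's 445-point database:

```python
# Sketch of an XGBoost regressor for hydrogen solubility with the six
# abstract-listed inputs (T, P, density at 20 C, MW, C wt%, H wt%).
import numpy as np
from xgboost import XGBRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score, mean_absolute_percentage_error

rng = np.random.default_rng(1)
X = rng.random((445, 6))   # columns: T, P, rho20, MW, C%, H% (placeholders)
y = rng.random(445)        # stand-in for measured H2 solubility

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=1)
model = XGBRegressor(n_estimators=300, max_depth=4, learning_rate=0.1)
model.fit(X_tr, y_tr)
pred = model.predict(X_te)
print("R2:", r2_score(y_te, pred))
print("MAPE:", mean_absolute_percentage_error(y_te, pred))
print("feature importances:", model.feature_importances_)  # crude sensitivity proxy
```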
4.
Machine learning algorithms have been widely used in mine fault diagnosis, and correct selection of a suitable algorithm is the key factor affecting the diagnosis. However, the impact of machine learning algorithms on the prediction performance of mine fault diagnosis models has not been fully evaluated. In this study, diagnosis models for windage alteration faults (WAFs) based on the K-nearest neighbor algorithm (KNN), multi-layer perceptron (MLP), support vector machine (SVM), and decision tree (DT) are constructed. Furthermore, the applicability of these four algorithms to WAF diagnosis is explored through a T-type ventilation network simulation experiment and a field application study at the Jinchuan No. 2 mine. The accuracy of fault location diagnosis was 100% for all four models in both networks. In the simulation experiment, the mean absolute percentage error (MAPE) between the predicted and real values of the fault volume was 0.59%, 97.26%, 123.61%, and 8.78% for the four models, respectively; in the field application it was 3.94%, 52.40%, 25.25%, and 7.15%, respectively. A comprehensive evaluation of the fault location and fault volume diagnosis tests showed that the KNN model is the most suitable algorithm for WAF diagnosis, with the DT model second-best. This study realizes intelligent diagnosis of WAFs and provides technical support for the realization of intelligent ventilation.
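The four-algorithm comparison can be sketched as below; this is an illustrative scikit-learn loop over KNN, MLP, SVM and DT regressors scored by MAPE, with synthetic stand-ins for the ventilation-network data:

```python
# Hedged sketch of the four-model comparison the abstract reports,
# scoring each candidate on a held-out set by MAPE.
import numpy as np
from sklearn.neighbors import KNeighborsRegressor
from sklearn.neural_network import MLPRegressor
from sklearn.svm import SVR
from sklearn.tree import DecisionTreeRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_absolute_percentage_error

rng = np.random.default_rng(2)
X = rng.random((300, 8))   # stand-in for airflow/pressure features
y = rng.random(300)        # stand-in for fault (air-volume) magnitude

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=2)
models = {
    "KNN": KNeighborsRegressor(),
    "MLP": MLPRegressor(max_iter=2000, random_state=2),
    "SVM": SVR(),
    "DT": DecisionTreeRegressor(random_state=2),
}
for name, m in models.items():
    m.fit(X_tr, y_tr)
    mape = mean_absolute_percentage_error(y_te, m.predict(X_te))
    print(f"{name}: MAPE = {mape:.2%}")
```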
5.
Electrical energy is one of the key components for the development and sustainability of any nation. India is a developing country blessed with a huge amount of renewable energy resources, yet there are various remote areas where grid supply is rarely available. Since electrical energy is a basic requirement, exploiting the available renewable energy resources, integrated with storage devices such as fuel cells and batteries, must be taken up as a priority for power generation, helping planners provide energy-efficient, alternative solutions. Such a solution will not only meet electricity demand but also help reduce greenhouse gas emissions, so that an efficient, sustainable and eco-friendly solution can be achieved, contributing substantially to the smart grid environment. In this paper, a modified grey wolf optimizer approach is utilized to develop a hybrid microgrid based on available renewable energy resources, considering modern power grid interactions. The proposed approach provides a robust and efficient microgrid that utilizes solar photovoltaic technology and a wind energy conversion system, integrating renewable resources with a meta-heuristic optimization algorithm for optimal dispatch of energy in a grid-connected hybrid microgrid system. The approach is mainly aimed at providing the optimal sizing of renewable-energy-based microgrids based on the load profile according to time of use. To validate the proposed approach, a comparative case study is conducted, showing significant savings of 30.88% and 49.99% of the rolling cost in comparison with fuzzy-logic- and mixed-integer-linear-programming-based energy management systems, respectively.
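For orientation, here is a minimal implementation of the standard grey wolf optimizer (not the paper's modified variant); the objective function is a placeholder where the microgrid sizing/dispatch cost would go:

```python
# Minimal grey wolf optimizer (standard GWO) minimizing a placeholder
# cost; in the paper's setting the objective would be the microgrid
# sizing / dispatch cost over the time-of-use load profile.
import numpy as np

def gwo(cost, dim, bounds, n_wolves=20, n_iter=100, seed=0):
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    wolves = rng.uniform(lo, hi, (n_wolves, dim))
    for t in range(n_iter):
        fitness = np.apply_along_axis(cost, 1, wolves)
        alpha, beta, delta = wolves[np.argsort(fitness)[:3]]  # three leaders
        a = 2 - 2 * t / n_iter              # linearly decreasing coefficient
        for i in range(n_wolves):
            X = []
            for leader in (alpha, beta, delta):
                r1, r2 = rng.random(dim), rng.random(dim)
                A, C = 2 * a * r1 - a, 2 * r2
                X.append(leader - A * np.abs(C * leader - wolves[i]))
            wolves[i] = np.clip(np.mean(X, axis=0), lo, hi)
    fitness = np.apply_along_axis(cost, 1, wolves)
    return wolves[np.argmin(fitness)], fitness.min()

# Placeholder objective: sphere function standing in for dispatch cost.
best, val = gwo(lambda x: np.sum(x ** 2), dim=4, bounds=(-10, 10))
print(best, val)
```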
6.
The evaluation of the volumetric accuracy of a machine tool is an open challenge in industry, and a wide variety of technical solutions are available on the market and at the research level. All solutions have advantages and disadvantages concerning which errors can be measured, the achievable uncertainty, the ease of implementation, the possibility of machine integration and automation, the equipment cost and the machine occupation time, and it is not always straightforward which option to choose for each application. The need to ensure accuracy during the whole lifetime of the machine, and the availability of monitoring systems developed following the Industry 4.0 trend, are pushing the development of measurement systems that can be integrated in the machine to perform semi-automatic verification procedures, which the machine user can run frequently to monitor the condition of the machine. Calibrated-artefact-based calibration and verification solutions have an advantage in this field over laser-based solutions in terms of cost and feasibility of machine integration, but they need to be optimized for each machine and each customer's requirements to achieve the required calibration uncertainty and minimize machine occupation time.

This paper introduces a digital-twin-based methodology to simulate all relevant effects in an artefact-based machine tool calibration procedure, from the machine itself with its expected error ranges, to the artefact geometry and uncertainty, artefact positions in the workspace, probe uncertainty, compensation model, etc. By parameterizing all relevant variables in the design of the calibration procedure, this simulation methodology can be used to analyse the effect of each design variable on the error-mapping uncertainty, which is of great help in adapting the procedure to each specific machine and user requirements. The simulation methodology and the analysis possibilities are illustrated by applying it to a 3-axis milling machine tool.
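One way to picture the digital-twin methodology is a Monte Carlo loop that injects assumed machine errors and probe noise into a virtual measurement of a calibrated artefact, then checks how well a compensation model recovers them. The sketch below uses a deliberately simple one-parameter (linear scale) error model; all magnitudes are illustrative assumptions, not the paper's:

```python
# Hedged Monte Carlo sketch: inject an assumed axis scale error into a
# virtual machine, "probe" an artefact with noise, fit a compensation
# model, and track the residual error after compensation.
import numpy as np

rng = np.random.default_rng(3)
n_points = 25
x = np.linspace(0, 500.0, n_points)       # artefact feature positions, mm

residuals = []
for trial in range(200):                  # Monte Carlo over the procedure
    true_scale = rng.normal(0, 20e-6)     # unknown axis scale error (mm/mm)
    probe_noise = rng.normal(0, 1e-3, n_points)    # probe uncertainty, mm
    measured_dev = true_scale * x + probe_noise    # deviations at the probe
    est_scale = np.polyfit(x, measured_dev, 1)[0]  # least-squares fit
    residuals.append(est_scale - true_scale)       # error left after compensation

print("residual scale-error std after compensation:", np.std(residuals))
```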
7.
Engineering of new glass compositions has shown a steady tendency to move from (educated) trial-and-error to data- and simulation-driven strategies. In this work, we developed a computer program that combines data-driven predictive models (in this case, neural networks) with a genetic algorithm to design glass compositions with desired combinations of properties. First, we induced predictive models for the glass transition temperature (Tg) using a dataset of 45,302 compositions with 39 different chemical elements, and for the refractive index (nd) using a dataset of 41,225 compositions with 38 different chemical elements. Then, we searched for relevant glass compositions using a genetic algorithm informed by a design trend of glasses having high nd (1.7 or more) and low Tg (500 °C or less). Two candidate compositions suggested by the combined algorithms were selected and produced in the laboratory. These compositions are significantly different from those in the datasets used to induce the predictive models, showing that the method is indeed capable of exploration. Both glasses met the constraints of the work, which supports the proposed framework. Therefore, this new tool can be used immediately to accelerate the design of new glasses. These results are a stepping stone on the pathway of machine-learning-guided design of novel glasses.
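The combined predictor-plus-genetic-algorithm loop might look like the following sketch, where the two property predictors are placeholder functions standing in for the trained Tg and nd neural networks and the element count is reduced for readability:

```python
# Sketch of the inverse-design loop: a genetic algorithm searching
# composition vectors, scored by surrogate property models. The two
# predictors are placeholders, not the paper's trained networks.
import numpy as np

rng = np.random.default_rng(4)
N_ELEM = 10                       # illustrative; the paper uses 38-39 elements

def predict_tg(c):                # placeholder for the trained Tg network
    return 400 + 300 * c[0] - 100 * c[1]

def predict_nd(c):                # placeholder for the trained nd network
    return 1.5 + 0.4 * c[2] + 0.2 * c[3]

def fitness(c):
    # reward high nd (>= 1.7) and penalize Tg above 500 C, per the design trend
    return predict_nd(c) - 1.7 - max(0.0, (predict_tg(c) - 500) / 500)

def normalize(pop):               # compositions as fractions summing to 1
    return pop / pop.sum(axis=1, keepdims=True)

pop = normalize(rng.random((50, N_ELEM)))
for gen in range(100):
    scores = np.array([fitness(c) for c in pop])
    parents = pop[np.argsort(scores)[-25:]]            # selection
    children = parents[rng.integers(0, 25, 25)].copy()
    children += rng.normal(0, 0.02, children.shape)    # mutation
    pop = normalize(np.clip(np.vstack([parents, children]), 1e-6, None))

best = pop[np.argmax([fitness(c) for c in pop])]
print("candidate composition:", np.round(best, 3))
```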
8.
9.
Traditional Multiple Empirical Kernel Learning (MEKL) expands the representations of the samples and achieves better classification ability by using different empirical kernels to map the original data space into multiple kernel spaces. To make MEKL suitable for imbalanced problems, this paper introduces a weight matrix and a regularization term into MEKL. The weight matrix assigns a high misclassification cost to the minority samples to balance the misclassification cost between the minority and majority classes. The regularization term, named Majority Projection (MP), makes the classification hyperplane fit the distribution shape of the majority samples and enlarges the between-class distance between the minority and majority classes. The contributions of this work are: (i) assigning high cost to minority samples to deal with imbalanced problems, (ii) introducing a new regularization term that accounts for the data distribution, and (iii) modifying the original PAC-Bayes bound to test the error upper bound of MEKL-MP. Analysis of the experimental results shows that the proposed MEKL-MP is well suited to imbalanced problems and has lower generalization risk according to the value of the PAC-Bayes bound.
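The weight-matrix idea — a higher misclassification cost on minority samples — can be illustrated with a standard kernel SVM as a stand-in for the MEKL learner; the inverse-frequency weights below are one common choice, not necessarily the paper's:

```python
# Hedged sketch of cost weighting for class imbalance: each sample's
# misclassification cost is inversely proportional to its class
# frequency, i.e. the diagonal of a weight "matrix" over the samples.
import numpy as np
from sklearn.svm import SVC
from sklearn.metrics import classification_report

rng = np.random.default_rng(5)
# imbalanced toy data: 180 majority vs 20 minority samples
X = np.vstack([rng.normal(0, 1, (180, 2)), rng.normal(2, 1, (20, 2))])
y = np.array([0] * 180 + [1] * 20)

counts = np.bincount(y)
weights = len(y) / (len(counts) * counts[y])   # high weight on minority class

clf = SVC(kernel="rbf")
clf.fit(X, y, sample_weight=weights)           # costly minority mistakes
print(classification_report(y, clf.predict(X)))
```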
10.
Xin-Na Geng & Danyu Bai, Engineering Optimization (《工程优选》), 2019, 51(8): 1301–1323
This article addresses the no-wait flowshop scheduling problem with simultaneous consideration of common due date assignment, convex resource allocation and learning effects in a two-machine setting. The processing time of each job can be controlled both by its position in the sequence and by allocating extra resource, where the processing time is a convex function of the amount of a common, continuously divisible resource allocated to the job. The objective is to determine the optimal common due date, the resource allocation and the schedule of jobs such that the total earliness, tardiness and common due date cost (respectively, the total resource consumption cost) is minimized under the constraint that the total resource consumption cost (respectively, the total earliness, tardiness and common due date cost) is bounded. Polynomial-time algorithms are developed for both versions of the problem.
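Many polynomial-time results in this literature reduce to a positional-weight matching argument: each sequence position carries a weight, and an optimal assignment of jobs to positions is obtained by sorting. The sketch below shows only that ingredient, under an assumed positional factor r^a standing in for a learning effect; it is not the paper's exact algorithm:

```python
# Hedged sketch of positional matching: by the rearrangement inequality,
# pairing the largest workload with the smallest positional weight
# minimizes sum(weight * workload). Values are illustrative.
import numpy as np

a = -0.2                                     # assumed learning index, a < 0
workloads = np.array([5.0, 3.0, 8.0, 6.0])  # normal processing requirements
n = len(workloads)
pos_weight = np.arange(1, n + 1, dtype=float) ** a   # positional factor r^a

job_order = np.argsort(-workloads)           # jobs, largest workload first
pos_order = np.argsort(pos_weight)           # positions, smallest weight first
schedule = np.empty(n, dtype=int)
schedule[pos_order] = job_order              # schedule[k] = job in position k+1

total = np.sum(workloads[schedule] * pos_weight)
print("schedule (job index per position):", schedule, "objective:", total)
```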