1.
Machine learning algorithms have been widely used in mine fault diagnosis, and selecting a suitable algorithm is a key factor affecting diagnosis performance. However, the impact of the choice of machine learning algorithm on the prediction performance of mine fault diagnosis models has not been fully evaluated. In this study, windage alteration fault (WAF) diagnosis models based on the K-nearest neighbor algorithm (KNN), multi-layer perceptron (MLP), support vector machine (SVM), and decision tree (DT) are constructed. The applicability of these four algorithms to WAF diagnosis is then explored through a T-type ventilation network simulation experiment and a field application at the Jinchuan No. 2 mine. The accuracy of fault location diagnosis for the four models was 100% in both networks. In the simulation experiment, the mean absolute percentage error (MAPE) between the predicted and true fault volumes for the four models was 0.59%, 97.26%, 123.61%, and 8.78%, respectively; in the field application it was 3.94%, 52.40%, 25.25%, and 7.15%. A comprehensive evaluation of the fault location and fault volume diagnosis tests showed that the KNN model is the most suitable algorithm for WAF diagnosis, with the DT model second-best. This study realizes intelligent diagnosis of WAFs and provides technical support for the realization of intelligent ventilation.
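The MAPE figures quoted above can be reproduced with a small helper; this is a minimal sketch in which the fault-volume readings and predictions are invented illustrative numbers, not data from the paper:

```python
def mape(y_true, y_pred):
    """Mean absolute percentage error, in percent."""
    return 100.0 * sum(abs(t - p) / abs(t) for t, p in zip(y_true, y_pred)) / len(y_true)

# Hypothetical fault-volume measurements and one model's predictions (toy values).
true_volumes = [2.0, 4.0, 5.0]
knn_pred = [2.0, 4.1, 4.9]
print(round(mape(true_volumes, knn_pred), 2))  # small MAPE indicates a good fit
```

Comparing this score across the four models, as the study does, is what ranks KNN first and DT second.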
2.
The evaluation of the volumetric accuracy of a machine tool is an open challenge in industry, and a wide variety of technical solutions are available on the market and at research level. Every solution has advantages and disadvantages concerning which errors can be measured, the achievable uncertainty, the ease of implementation, the possibility of machine integration and automation, the equipment cost, and the machine occupation time, so it is not always straightforward which option to choose for a given application. The need to ensure accuracy during the whole lifetime of the machine, together with the availability of monitoring systems developed following the Industry 4.0 trend, is pushing the development of measurement systems that can be integrated in the machine to perform semi-automatic verification procedures, which the machine user can run frequently to monitor the machine's condition. Calibrated-artefact-based calibration and verification solutions have an advantage over laser-based solutions in this field in terms of cost and feasibility of machine integration, but they must be optimized for each machine and each customer's requirements to achieve the required calibration uncertainty and minimize machine occupation time. This paper introduces a digital-twin-based methodology to simulate all relevant effects in an artefact-based machine tool calibration procedure: the machine itself with its expected error ranges, the artefact geometry and uncertainty, the artefact positions in the workspace, the probe uncertainty, the compensation model, and so on. By parameterizing all relevant variables in the design of the calibration procedure, this simulation methodology can be used to analyse the effect of each design variable on the error-mapping uncertainty, which is of great help in adapting the procedure to each specific machine and user's requirements. The simulation methodology and the analysis possibilities are illustrated by applying them to a 3-axis milling machine tool.
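The core idea of simulating the calibration procedure to study error-mapping uncertainty can be sketched as a Monte Carlo loop; the single linear scale error, artefact positions, and probe noise below are illustrative assumptions, far simpler than the paper's full digital twin:

```python
import random

def estimate_scale_error(true_scale_err, artefact_positions, probe_sigma, rng):
    # Simulated deviation at each artefact position: scale error * position + probe noise.
    meas = [true_scale_err * x + rng.gauss(0.0, probe_sigma) for x in artefact_positions]
    # Least-squares slope through the origin recovers the scale-error estimate.
    num = sum(x * m for x, m in zip(artefact_positions, meas))
    den = sum(x * x for x in artefact_positions)
    return num / den

rng = random.Random(0)
positions = [100.0, 200.0, 300.0, 400.0]   # mm along one axis (assumed layout)
# Repeat the virtual calibration many times to see the spread of the estimate.
estimates = [estimate_scale_error(50e-6, positions, 1e-3, rng) for _ in range(2000)]
mean = sum(estimates) / len(estimates)
spread = (sum((e - mean) ** 2 for e in estimates) / len(estimates)) ** 0.5
print(abs(mean - 50e-6) < 1e-6, spread > 0)
```

Varying the positions or the probe sigma and re-running shows how each design variable feeds into the mapping uncertainty, which is the kind of trade-off analysis the methodology automates.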
3.
The engineering of new glass compositions has shown a strong tendency to move from (educated) trial and error toward data- and simulation-driven strategies. In this work, we developed a computer program that combines data-driven predictive models (in this case, neural networks) with a genetic algorithm to design glass compositions with desired combinations of properties. First, we induced predictive models for the glass transition temperature (Tg), using a dataset of 45,302 compositions with 39 different chemical elements, and for the refractive index (nd), using a dataset of 41,225 compositions with 38 different chemical elements. Then, we searched for relevant glass compositions using a genetic algorithm guided by a design target of glasses with high nd (1.7 or more) and low Tg (500 °C or less). Two candidate compositions suggested by the combined algorithms were selected and produced in the laboratory. These compositions differ significantly from those in the datasets used to induce the predictive models, showing that the method is indeed capable of exploration. Both glasses met the constraints of the work, which supports the proposed framework. This new tool can therefore be used immediately to accelerate the design of new glasses. These results are a stepping stone on the path to machine-learning-guided design of novel glasses.
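The search loop can be sketched as a small genetic algorithm whose fitness penalizes violations of the nd >= 1.7 and Tg <= 500 °C targets; the two stand-in "predictive models" below are toy linear functions of a two-component composition vector, not the paper's trained neural networks:

```python
import random

def predicted_nd(x):   # stand-in for the trained nd model (assumption)
    return 1.5 + 0.4 * x[0]

def predicted_tg(x):   # stand-in for the trained Tg model (assumption)
    return 700.0 - 300.0 * x[1]

def fitness(x):
    # Zero when both design targets are met; negative otherwise.
    return min(predicted_nd(x) - 1.7, 0.0) + min(500.0 - predicted_tg(x), 0.0)

rng = random.Random(1)
pop = [[rng.random(), rng.random()] for _ in range(30)]
for _ in range(60):
    pop.sort(key=fitness, reverse=True)
    parents = pop[:10]                      # elitist selection
    children = []
    for _ in range(20):
        a, b = rng.sample(parents, 2)       # crossover by averaging, plus mutation
        child = [(ai + bi) / 2 + rng.gauss(0, 0.02) for ai, bi in zip(a, b)]
        children.append([min(max(c, 0.0), 1.0) for c in child])
    pop = parents + children
best = max(pop, key=fitness)
print(predicted_nd(best) >= 1.7, predicted_tg(best) <= 500.0)
```

In the paper the fitness would instead query the neural-network surrogates over a 30-plus-element composition space, but the select/crossover/mutate loop is the same shape.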
4.
Membrane electrode assembly (MEA) is a key component of a proton exchange membrane fuel cell (PEMFC). However, developing a new MEA that meets desired properties, such as operation under low-humidity conditions without a humidifier, is a time-consuming and costly process. This study applies a machine-learning approach, using K-nearest neighbors (KNN) and neural networks (NN), to the MEA development process by identifying a suitable catalyst layer (CL) recipe. Minimum redundancy maximum relevance and principal component analysis were implemented to identify the most important predictors and reduce the data dimension. The number of predictors was found to play an essential role in the accuracy of the KNN and NN models, even though the predictors are self-correlated. The KNN model with K = 7 minimized the model loss, at 11.9%. The NN model, constructed with three hidden layers of nine, eight, and nine nodes, achieved the lowest error: 0.1293 for the Pt catalyst and 0.031 for PVA, a good blending additive in the CL of the MEA. However, even with low error, the prediction of PVA appears inaccurate regardless of the model structure. The KNN model is therefore more appropriate for CL recipe prediction.
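A minimal from-scratch KNN illustrates how classification loss would be compared across values of K, as in the model selection above; the recipe features and labels are invented toy data, not the study's dataset:

```python
from collections import Counter

def knn_predict(train, query, k):
    # train: list of (features, label); majority vote among the k nearest neighbors.
    nearest = sorted(train, key=lambda p: sum((a - b) ** 2 for a, b in zip(p[0], query)))[:k]
    return Counter(label for _, label in nearest).most_common(1)[0][0]

# Hypothetical toy recipes: (Pt loading, additive fraction) -> performance class.
train = [((0.1, 0.2), 'low'), ((0.2, 0.1), 'low'), ((0.8, 0.9), 'high'),
         ((0.9, 0.8), 'high'), ((0.7, 0.7), 'high'), ((0.15, 0.25), 'low')]
test = [((0.85, 0.85), 'high'), ((0.12, 0.18), 'low')]
for k in (1, 3):
    loss = sum(knn_predict(train, q, k) != y for q, y in test) / len(test)
    print(k, loss)
```

Sweeping K and picking the value with the lowest held-out loss is the procedure that led the study to K = 7.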
5.
The evaluation of the functional features of manufactured workpieces is based on GO and NO-GO test results, obtained by comparing measured geometric characteristics with the nominal dimensions and tolerances specified by the designer. These geometrical specifications rest on a tolerancing system that was originally defined for a single function: mating capability. However, many other new functions (such as reduced flow resistance, light absorption, reduced friction, diffraction of light, self-cleaning, or mass transmission) are now to be realized in products, particularly through micro- and nano-scale features. If the verification process can predict the achievable degree of functionality, the usability of a part can be assessed more accurately, and consequently quality and economics can be improved. A new principle for tolerancing and verification therefore proves necessary. In this paper, the fundamental deficit of the current tolerancing and specification systems, GPS and ASME Y14.5, is derived, and a path for extending the system by prepending a functional model is shown. To verify the functional capability of workpieces, an approach based on simulations with a parameterized mathematical-physical model of the function is suggested. The advantages of this approach are discussed and demonstrated with examples of microstructured inking rolls, crankshafts, and injection valves.
6.
Although greedy algorithms are highly efficient, they often yield suboptimal solutions to the ensemble pruning problem, since their exploration areas are largely limited. Another marked defect of almost all existing ensemble pruning algorithms, including greedy ones, is that they simply abandon all classifiers that fail in the competition of ensemble selection, wasting considerable useful resources and information. Motivated by these observations, this work proposes a greedy Reverse Reduce-Error (RRE) pruning algorithm that incorporates a subtraction operation. The RRE algorithm makes use of the defeated candidate networks as follows: the Worst Single Model (WSM) is chosen, and its votes are subtracted from the votes of the components selected into the pruned ensemble; the rationale is that, in most cases, the WSM is likely to be mistaken in its estimates for the test samples. Unlike classical RE, the near-optimal solution is produced based on the pruned error of all available sequential subensembles. Moreover, the backfitting step of the RE algorithm is replaced in RRE by the selection of a WSM, the problem of ties can be resolved more naturally, and soft voting is employed at test time. The performance of the RE and RRE algorithms and of two baseline methods, namely selecting the Best Single Model (BSM) from the initial ensemble and retaining all member networks of the initial ensemble (ALL), is evaluated on seven benchmark classification tasks under different initial ensemble setups. The results of the empirical investigation show the superiority of RRE over the other three ensemble pruning algorithms.
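The vote-subtraction step can be sketched with hard voting as follows; the member predictions and the pruned-ensemble selection are toy assumptions, not the paper's benchmark setup (the paper additionally uses soft voting at test time):

```python
from collections import Counter

truth = ['a', 'a', 'b', 'b', 'a']
# Hypothetical predictions of five candidate classifiers on five samples.
preds = [
    ['a', 'b', 'b', 'b', 'a'],
    ['b', 'b', 'b', 'b', 'a'],
    ['a', 'a', 'b', 'b', 'a'],
    ['a', 'a', 'a', 'b', 'a'],
    ['b', 'b', 'a', 'a', 'b'],   # weak model, wrong everywhere here
]

def accuracy(p):
    return sum(x == y for x, y in zip(p, truth)) / len(truth)

# Worst Single Model among all candidates (here it falls outside the pruned set).
wsm = min(range(len(preds)), key=lambda i: accuracy(preds[i]))

def ensemble_predict(members, subtract=None):
    out = []
    for j in range(len(truth)):
        votes = Counter(preds[i][j] for i in members)
        if subtract is not None:
            votes[preds[subtract][j]] -= 1   # subtract the WSM's vote
        out.append(votes.most_common(1)[0][0])
    return out

selected = [0, 1, 2, 3]                      # pruned ensemble (assumed selection)
plain = ensemble_predict(selected)
rre = ensemble_predict(selected, subtract=wsm)
print(accuracy(plain), accuracy(rre))
```

Because the WSM tends to be wrong, subtracting its vote pushes tied or marginal decisions away from its choice, which is exactly how the defeated model's information is reused instead of discarded.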
7.
The load applied to a machine tool feed drive changes during machining as material is removed, and this load change alters the Coulomb friction of the feed drive. Because Coulomb friction accounts for a large part of the total friction, the accuracy of friction compensation control is limited if this nonlinear change in the applied load is not considered. This paper presents a new friction compensation method that estimates the machine tool load in real time and accounts for its effect on friction characteristics. A friction observer based on a Kalman filter with load estimation is proposed for friction compensation control under changing applied loads. A specially designed feed drive testbed that allows the applied load to be modified easily was constructed for experimental verification, and the control performance and friction estimation accuracy are demonstrated experimentally on it.
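The observer idea can be sketched as a scalar Kalman filter tracking the Coulomb friction force from noisy friction measurements (e.g. commanded force minus the inertial force during near-constant-velocity motion); the state model, noise levels, and friction value below are illustrative assumptions, not the paper's observer design:

```python
import random

def kalman_friction(measurements, q=1e-4, r=0.25):
    """Scalar Kalman filter: state is the Coulomb friction force, assumed to
    drift slowly (process noise q); each measurement has variance r."""
    x, p = 0.0, 1.0          # initial estimate and covariance
    for z in measurements:
        p += q               # predict: friction may drift as the load changes
        k = p / (p + r)      # Kalman gain
        x += k * (z - x)     # update with the new friction measurement
        p *= (1 - k)
    return x

rng = random.Random(0)
true_fc = 12.0               # N, hypothetical Coulomb friction under the current load
meas = [true_fc + rng.gauss(0, 0.5) for _ in range(200)]
est = kalman_friction(meas)
print(abs(est - true_fc) < 0.3)
```

A small process noise q lets the estimate follow slow load-induced friction changes while still averaging out measurement noise; the estimated friction is then fed forward as the compensation torque.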
8.
With the development of educational information technology, the blended online-offline teaching mode has become a trend. To address problems such as the difficulty of assembling part-time graduate students for classroom teaching and its limited effectiveness, a blended online-offline teaching mode based on "Internet + virtual simulation technology" is proposed. Taking professional-degree graduate students in control engineering as an example, and in connection with the course Modern Electrical Control Technology (《现代电气控制技术》), this paper discusses ways to implement blended online-offline teaching, methods for teacher-student interaction, project-driven case-based teaching, and the construction of a virtual simulation experiment platform for electrical control systems, so as to effectively improve the training quality of part-time graduate students.
9.
10.
Traditional Multiple Empirical Kernel Learning (MEKL) expands the representation of samples and achieves better classification by using different empirical kernels to map the original data space into multiple kernel spaces. To adapt MEKL to imbalanced problems, this paper introduces a weight matrix and a regularization term into MEKL. The weight matrix assigns a high misclassification cost to minority samples, balancing the misclassification cost between the minority and majority classes. The regularization term, named Majority Projection (MP), makes the classification hyperplane fit the distribution of the majority samples and enlarges the between-class distance between the minority and majority classes. The contributions of this work are: (i) assigning high cost to minority samples to handle imbalanced problems, (ii) introducing a new regularization term that accounts for the data distribution, and (iii) modifying the original PAC-Bayes bound to bound the error of MEKL-MP. Analysis of the experimental results shows that the proposed MEKL-MP is well suited to imbalanced problems and has lower generalization risk, in accordance with the value of the PAC-Bayes bound.
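The role of the weight matrix can be illustrated with class-balanced weights that equalize the total cost contributed by each class; the inverse-frequency rule below is a common heuristic, not necessarily the paper's exact weight matrix:

```python
# Imbalanced toy labels: 8 majority-class samples (+1) and 2 minority-class samples (-1).
labels = [1] * 8 + [-1] * 2
n = len(labels)
counts = {c: labels.count(c) for c in set(labels)}

# Each sample's weight is inversely proportional to its class frequency, so
# minority samples carry a higher misclassification cost.
weights = [n / (2 * counts[y]) for y in labels]

# Both classes now contribute the same total weight to the loss.
print(round(sum(w for w, y in zip(weights, labels) if y == 1), 2),
      round(sum(w for w, y in zip(weights, labels) if y == -1), 2))
```

In MEKL-MP these per-sample costs would sit on the diagonal of the weight matrix inside the kernel-space objective, shifting the learned hyperplane toward correctly classifying the minority class.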