  Subscription full text   8998 articles
  Free   826 articles
  Domestic free   727 articles
Electrical engineering   196 articles
Technical theory   1 article
General   361 articles
Chemical industry   217 articles
Metalworking   653 articles
Machinery and instruments   1443 articles
Building science   237 articles
Mining engineering   70 articles
Energy and power   146 articles
Light industry   311 articles
Water conservancy engineering   30 articles
Petroleum and natural gas   34 articles
Weapons industry   19 articles
Radio and electronics   604 articles
General industrial technology   325 articles
Metallurgical industry   94 articles
Atomic energy technology   15 articles
Automation technology   5795 articles
  2024   16 articles
  2023   200 articles
  2022   333 articles
  2021   426 articles
  2020   313 articles
  2019   235 articles
  2018   217 articles
  2017   238 articles
  2016   303 articles
  2015   367 articles
  2014   518 articles
  2013   559 articles
  2012   631 articles
  2011   776 articles
  2010   504 articles
  2009   570 articles
  2008   525 articles
  2007   587 articles
  2006   534 articles
  2005   402 articles
  2004   324 articles
  2003   317 articles
  2002   293 articles
  2001   202 articles
  2000   177 articles
  1999   161 articles
  1998   157 articles
  1997   132 articles
  1996   108 articles
  1995   94 articles
  1994   53 articles
  1993   62 articles
  1992   45 articles
  1991   27 articles
  1990   28 articles
  1989   23 articles
  1988   14 articles
  1987   11 articles
  1986   7 articles
  1985   6 articles
  1984   6 articles
  1983   8 articles
  1982   9 articles
  1981   6 articles
  1980   5 articles
  1979   2 articles
  1978   5 articles
  1977   4 articles
  1976   6 articles
  1973   2 articles
Sort order: 10,000 results returned; search took 46 ms
1.
Recently, a number of classification techniques have been introduced. However, processing large datasets in a reasonable time has become a major challenge, making classification more complex and computationally expensive. Hence the need for solutions, such as field-programmable gate arrays (FPGAs), that can overcome these constraints. In this paper, we give an overview of the various classification techniques. Then, we present the existing FPGA-based implementations of these classification methods. After that, we investigate the challenges confronted and the optimization strategies employed. Finally, we highlight the hardware accelerator architectures and hardware design tools suggested to improve the FPGA implementation of classification methods.
2.
Membrane electrode assembly (MEA) is considered a key component of a proton exchange membrane fuel cell (PEMFC). However, developing a new MEA that meets desired properties, such as operation under low-humidity conditions without a humidifier, is a time-consuming and costly process. This study employs a machine-learning-based approach using K-nearest neighbors (KNN) and neural networks (NN) in the MEA development process to identify a suitable catalyst layer (CL) recipe. Minimum redundancy maximum relevance and principal component analysis were implemented to identify the most important predictors and reduce the data dimensionality. The number of predictors was found to play an essential role in the accuracy of the KNN and NN models, even though the predictors are self-correlated. The KNN model with K = 7 minimized the model loss, at 11.9%. The NN model, constructed with three hidden layers of nine, eight, and nine nodes, achieved the lowest errors of 0.1293 for the Pt catalyst and 0.031 for PVA as an additive blended into the CL of the MEA. However, even with this low error, the prediction for PVA appears inaccurate regardless of the model structure. The KNN model is therefore more appropriate for CL recipe prediction.
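A minimal sketch of the model comparison described above, using scikit-learn. The synthetic recipe data, feature count, and train/test split are hypothetical; only the hyperparameters reported in the abstract (K = 7 and hidden layers of nine, eight, and nine nodes) are taken from it.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsRegressor
from sklearn.neural_network import MLPRegressor
from sklearn.metrics import mean_absolute_error

rng = np.random.default_rng(0)
X = rng.random((200, 12))            # placeholder CL-recipe predictors
y = rng.random(200)                  # placeholder target (e.g., Pt loading)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Reduce predictor dimensionality, as the study does with PCA.
pca = PCA(n_components=6).fit(X_train)
X_train_p, X_test_p = pca.transform(X_train), pca.transform(X_test)

# KNN with K = 7, reported as minimizing the model loss.
knn = KNeighborsRegressor(n_neighbors=7).fit(X_train_p, y_train)

# NN with three hidden layers of 9, 8, and 9 nodes, as in the abstract.
nn = MLPRegressor(hidden_layer_sizes=(9, 8, 9), max_iter=2000,
                  random_state=0).fit(X_train_p, y_train)

for name, model in [("KNN", knn), ("NN", nn)]:
    print(name, mean_absolute_error(y_test, model.predict(X_test_p)))
```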
3.
Having accurate information about hydrogen solubility in hydrocarbon fuels and feedstocks is very important in petroleum refineries and coal processing plants. In the present work, four machine learning methods were used to estimate hydrogen solubility in hydrocarbon fuels: extreme gradient boosting (XGBoost), a multi-layer perceptron (MLP) trained with the Levenberg–Marquardt (LM) algorithm, adaptive boosting support vector regression (AdaBoost-SVR), and a memory-efficient gradient boosting tree system on adaptive compact distributions (LiteMORT). To this end, a database containing 445 experimental hydrogen solubility measurements in 17 different hydrocarbon fuels/feedstocks was collected over wide ranges of operating pressures and temperatures. These hydrocarbon fuels include petroleum fractions, refinery products, coal liquids, bitumen, and shale oil. The input parameters of the models are temperature and pressure, along with the fuels' density at 20 °C, molecular weight, and carbon (C) and hydrogen (H) weight percentages. XGBoost showed the highest accuracy of the models, with an overall mean absolute percent relative error of 1.41% and a coefficient of determination (R2) of 0.9998. Seven equations of state (EOSs) were also used to predict hydrogen solubility in hydrocarbon fuels; the 2- and 3-parameter Soave–Redlich–Kwong EOSs gave the best estimates among them. Moreover, sensitivity analysis indicated that pressure has the greatest influence on hydrogen solubility, followed by temperature and the hydrogen weight percentage of the fuel. Finally, a Leverage-approach analysis showed that the XGBoost model can be well trusted for estimating hydrogen solubility in hydrocarbon fuels.
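A hedged sketch of the XGBoost workflow described above. The value ranges and the synthetic target are placeholder assumptions; the study's real database holds 445 experimental points for 17 fuels, and only the six input parameters are taken from the abstract.

```python
import numpy as np
import xgboost as xgb
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score, mean_absolute_percentage_error

rng = np.random.default_rng(1)
n = 445
# Inputs per the abstract: T, P, density at 20 °C, molecular weight, C wt%, H wt%.
X = np.column_stack([
    rng.uniform(300, 700, n),    # temperature, K (assumed range)
    rng.uniform(1, 30, n),       # pressure, MPa (assumed range)
    rng.uniform(0.7, 1.1, n),    # density at 20 °C, g/cm^3
    rng.uniform(100, 500, n),    # molecular weight, g/mol
    rng.uniform(80, 90, n),      # carbon, wt%
    rng.uniform(8, 14, n),       # hydrogen, wt%
])
y = 0.1 + rng.random(n)          # placeholder solubility, kept away from zero for MAPE

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=1)
model = xgb.XGBRegressor(n_estimators=500, learning_rate=0.05).fit(X_tr, y_tr)
pred = model.predict(X_te)
print("MAPE:", mean_absolute_percentage_error(y_te, pred))
print("R2:  ", r2_score(y_te, pred))
```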
4.
The detection of retinal microaneurysms is crucial for the early detection of important diseases such as diabetic retinopathy. However, detecting these lesions in retinography, the most widely available retinal imaging modality, remains very challenging, mainly due to the tiny size and low contrast of the microaneurysms in the images. Consequently, automated microaneurysm detection usually relies on extensive ad-hoc processing. Although microaneurysms can be detected more easily using fluorescein angiography, this alternative imaging modality is invasive and not adequate for regular preventive screening. In this work, we propose a novel deep learning methodology that takes advantage of unlabeled multimodal image pairs to improve the detection of microaneurysms in retinography. In particular, we propose a novel adversarial multimodal pre-training that consists of predicting fluorescein angiography from retinography using generative adversarial networks. This pre-training allows the networks to learn about the retina and the microaneurysms without any manually annotated data. Additionally, we propose to approach microaneurysm detection as heatmap regression, which allows efficient detection and precise localization of multiple microaneurysms. To validate and analyze the proposed methodology, we perform exhaustive experiments on different public datasets and provide comparisons against state-of-the-art approaches. The results show satisfactory performance, with Average Precision of 64.90%, 31.36%, and 33.55% on the E-Ophtha, ROC, and DDR public datasets, respectively. Overall, the proposed approach outperforms existing deep learning alternatives while providing a more straightforward detection method that can be applied effectively to raw, unprocessed retinal images.
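The heatmap-regression formulation can be illustrated in a few lines: each annotated microaneurysm becomes a small Gaussian peak in a target map that a network can then regress. The image size, lesion coordinates, and sigma below are illustrative assumptions, not values from the paper.

```python
import numpy as np

def make_heatmap(shape, centers, sigma=3.0):
    """Render one Gaussian per lesion center onto a single-channel target map."""
    h, w = shape
    ys, xs = np.mgrid[0:h, 0:w]
    heatmap = np.zeros(shape, dtype=np.float32)
    for cy, cx in centers:
        g = np.exp(-((ys - cy) ** 2 + (xs - cx) ** 2) / (2 * sigma ** 2))
        heatmap = np.maximum(heatmap, g)   # max, not sum, so nearby peaks stay distinct
    return heatmap

# Two hypothetical lesion coordinates on a 128x128 retinography crop.
target = make_heatmap((128, 128), [(40, 52), (90, 101)])
print(target.shape, target.max())
```

Local maxima of the predicted map above a threshold then give both the detections and their precise locations in one pass.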
5.
Machine learning algorithms have been widely used in mine fault diagnosis, and selecting a suitable algorithm is the key factor affecting diagnostic quality. However, the impact of the choice of machine learning algorithm on the prediction performance of mine fault diagnosis models has not been fully evaluated. In this study, windage alteration fault (WAF) diagnosis models based on the K-nearest neighbors algorithm (KNN), multi-layer perceptron (MLP), support vector machine (SVM), and decision tree (DT) are constructed. The applicability of these four algorithms to WAF diagnosis is then explored through a T-type ventilation network simulation experiment and a field application study at the Jinchuan No. 2 mine. The fault-location diagnosis accuracy of all four models in both networks was 100%. In the simulation experiment, the mean absolute percentage errors (MAPE) between the predicted and real fault volumes for the four models were 0.59%, 97.26%, 123.61%, and 8.78%, respectively; in the field application, they were 3.94%, 52.40%, 25.25%, and 7.15%. A comprehensive evaluation of the fault-location and fault-volume diagnosis tests showed that the KNN model is the most suitable algorithm for WAF diagnosis, with the DT model second-best. This study realizes the intelligent diagnosis of WAFs and provides technical support for intelligent ventilation.
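A sketch of the four-model comparison, with scikit-learn regressors as stand-ins. The sensor features and fault volumes are synthetic placeholders; only the choice of KNN, MLP, SVM, and DT, and MAPE as the metric, follow the study.

```python
import numpy as np
from sklearn.neighbors import KNeighborsRegressor
from sklearn.neural_network import MLPRegressor
from sklearn.svm import SVR
from sklearn.tree import DecisionTreeRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_absolute_percentage_error

rng = np.random.default_rng(2)
X = rng.random((300, 8))     # placeholder airflow/pressure sensor readings
y = 0.1 + rng.random(300)    # placeholder fault volume (offset keeps MAPE stable)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=2)
models = {
    "KNN": KNeighborsRegressor(),
    "MLP": MLPRegressor(max_iter=2000, random_state=2),
    "SVM": SVR(),
    "DT":  DecisionTreeRegressor(random_state=2),
}
for name, m in models.items():
    m.fit(X_tr, y_tr)
    print(name, mean_absolute_percentage_error(y_te, m.predict(X_te)))
```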
6.
Electrical energy is one of the key components of the development and sustainability of any nation. India is a developing country blessed with a huge amount of renewable energy resources, yet there are various remote areas where grid supply is rarely available. Since electrical energy is a basic requirement, exploiting the available renewable resources, integrated with storage devices such as fuel cells and batteries, must be taken up as a priority for power generation, helping planners provide an energy-efficient alternative. Such a solution would not only meet electricity demand but also reduce greenhouse gas emissions, yielding an efficient, sustainable, and eco-friendly system that contributes to the smart grid environment. In this paper, a modified grey wolf optimizer is used to develop a hybrid microgrid based on the available renewable energy resources, considering modern power grid interactions. The proposed approach yields a robust and efficient microgrid that uses solar photovoltaic technology and a wind energy conversion system, integrating renewable resources with a meta-heuristic optimization algorithm for optimal energy dispatch in a grid-connected hybrid microgrid. The approach mainly aims to provide optimal sizing of renewable-energy-based microgrids based on the load profile according to time of use. To validate the proposed approach, a comparative case study is conducted, which shows significant rolling-cost savings of 30.88% and 49.99% compared with fuzzy logic and mixed-integer linear programming-based energy management systems, respectively.
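For illustration, a minimal (unmodified) grey wolf optimizer over a toy two-variable sizing problem is sketched below. The cost function and bounds are hypothetical stand-ins for the paper's rolling-cost objective, and the paper's modifications to GWO are not reproduced.

```python
import numpy as np

def gwo(cost, bounds, n_wolves=20, iters=100, seed=0):
    """Standard grey wolf optimizer: wolves move toward the three best solutions."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds[:, 0], bounds[:, 1]
    wolves = rng.uniform(lo, hi, (n_wolves, len(lo)))
    for t in range(iters):
        fitness = np.array([cost(w) for w in wolves])
        alpha, beta, delta = wolves[np.argsort(fitness)[:3]]
        a = 2 - 2 * t / iters                      # linearly decreasing coefficient
        for i in range(n_wolves):
            new = np.zeros_like(wolves[i])
            for leader in (alpha, beta, delta):
                r1, r2 = rng.random(len(lo)), rng.random(len(lo))
                A, C = 2 * a * r1 - a, 2 * r2
                new += leader - A * np.abs(C * leader - wolves[i])
            wolves[i] = np.clip(new / 3, lo, hi)   # average of the three pulls
    best = min(wolves, key=cost)
    return best, cost(best)

# Hypothetical two-variable sizing: PV capacity and wind capacity (kW).
bounds = np.array([[0.0, 100.0], [0.0, 100.0]])
cost = lambda x: (x[0] - 40) ** 2 + (x[1] - 25) ** 2   # placeholder rolling cost
print(gwo(cost, bounds))
```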
7.
Evaluating the volumetric accuracy of a machine tool is an open challenge in industry, and a wide variety of technical solutions are available on the market and at the research level. Every solution has advantages and disadvantages concerning which errors can be measured, the achievable uncertainty, the ease of implementation, the possibility of machine integration and automation, the equipment cost, and the machine occupation time, so it is not always straightforward which option to choose for each application. The need to ensure accuracy over the whole lifetime of the machine, together with the monitoring systems that have emerged from the Industry 4.0 trend, is pushing the development of machine-integrated measurement systems for semi-automatic verification procedures that users can run frequently to monitor machine condition. Calibrated-artefact-based calibration and verification solutions have an advantage here over laser-based solutions in terms of cost and feasibility of machine integration, but they must be optimized for each machine and customer requirement to achieve the required calibration uncertainty and minimize machine occupation time. This paper introduces a digital-twin-based methodology to simulate all relevant effects in an artefact-based machine tool calibration procedure: the machine itself with its expected error ranges, the artefact geometry and uncertainty, the artefact positions in the workspace, the probe uncertainty, the compensation model, and so on. By parameterizing all relevant variables in the design of the calibration procedure, this simulation methodology can be used to analyze the effect of each design variable on the error-mapping uncertainty, which is of great help in adapting the procedure to each specific machine and user requirement. The simulation methodology and the analysis possibilities are illustrated by applying them to a 3-axis milling machine tool.
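A loose sketch of the Monte Carlo idea behind such a digital twin: sample the parameterized error sources (artefact calibration uncertainty, probing repeatability), simulate the identification of a machine error, and take the spread of the identified value as the error-mapping uncertainty. All distributions and magnitudes below are assumptions, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(3)
n_runs = 5000
true_squareness = 20e-6          # rad, the machine error we try to identify (assumed)

artefact_u = rng.normal(0, 1e-6, n_runs)   # artefact calibration uncertainty (assumed)
probe_u = rng.normal(0, 0.5e-6, n_runs)    # probing repeatability (assumed)
identified = true_squareness + artefact_u + probe_u

# Error-mapping uncertainty as the spread of the identified parameter.
print("mean identified error: %.2e rad" % identified.mean())
print("k=2 uncertainty:       %.2e rad" % (2 * identified.std()))
```

Re-running such a simulation for different artefact positions or probe grades is what lets each design variable's effect on the mapping uncertainty be quantified before any machine time is spent.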
8.
The engineering of new glass compositions has shown a steady tendency to move from (educated) trial-and-error toward data- and simulation-driven strategies. In this work, we developed a computer program that combines data-driven predictive models (in this case, neural networks) with a genetic algorithm to design glass compositions with desired combinations of properties. First, we induced predictive models for the glass transition temperature (Tg), using a dataset of 45,302 compositions covering 39 chemical elements, and for the refractive index (nd), using a dataset of 41,225 compositions covering 38 chemical elements. Then, we searched for relevant glass compositions using a genetic algorithm informed by a design trend of glasses with high nd (1.7 or more) and low Tg (500 °C or less). Two candidate compositions suggested by the combined algorithms were selected and produced in the laboratory. These compositions differ significantly from those in the datasets used to induce the predictive models, showing that the method is indeed capable of exploration. Both glasses met the constraints of the work, which supports the proposed framework. This new tool can therefore be used immediately to accelerate the design of new glasses. These results are a stepping stone on the pathway toward machine-learning-guided design of novel glasses.
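A compact sketch of the predictive-model-plus-genetic-algorithm loop described above. The surrogate below is a stand-in for the paper's trained neural networks, and the fitness merely rewards high nd and penalizes Tg above 500 °C per the stated design trend; compositions are hypothetical fractions.

```python
import numpy as np

rng = np.random.default_rng(4)
N_OXIDES = 6                      # placeholder composition dimensionality

def surrogate(comp):
    """Stand-in for the trained NN predictors of (Tg, nd)."""
    tg = 400 + 300 * comp[0] - 150 * comp[1]     # fake Tg model, °C
    nd = 1.5 + 0.4 * comp[2] + 0.2 * comp[3]     # fake nd model
    return tg, nd

def fitness(comp):
    tg, nd = surrogate(comp)
    return (nd - 1.7) - max(0.0, (tg - 500) / 500)   # reward nd, penalize high Tg

def normalize(pop):
    return pop / pop.sum(axis=1, keepdims=True)       # fractions sum to 1

pop = normalize(rng.random((50, N_OXIDES)))
for gen in range(100):
    scores = np.array([fitness(c) for c in pop])
    parents = pop[np.argsort(scores)[-20:]]           # truncation selection
    kids = (parents[rng.integers(0, 20, 50)] +
            parents[rng.integers(0, 20, 50)]) / 2     # uniform-average crossover
    kids += rng.normal(0, 0.02, kids.shape)           # Gaussian mutation
    pop = normalize(np.clip(kids, 1e-6, None))

best = max(pop, key=fitness)
print("best composition:", np.round(best, 3), "->", surrogate(best))
```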
9.
10.
Traditional Multiple Empirical Kernel Learning (MEKL) expands the representation of the samples and brings better classification ability by using different empirical kernels to map the original data space into multiple kernel spaces. To make MEKL suitable for imbalanced problems, this paper introduces a weight matrix and a regularization term into MEKL. The weight matrix assigns a high misclassification cost to minority samples so that the misclassification costs of the minority and majority classes are balanced. The regularization term, named Majority Projection (MP), makes the classification hyperplane fit the distribution shape of the majority samples and enlarges the between-class distance between the minority and majority classes. The contributions of this work are: (i) assigning high cost to minority samples to deal with imbalanced problems, (ii) introducing a new regularization term that accounts for the data distribution, and (iii) modifying the original PAC-Bayes bound to test the upper bound on the error of MEKL-MP. Analysis of the experimental results shows that the proposed MEKL-MP is well suited to imbalanced problems and has lower generalization risk according to the value of the PAC-Bayes bound.
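MEKL-MP itself is not reproduced here; the sketch below only illustrates the cost-weighting idea from the abstract, assigning higher misclassification cost to minority samples so the two classes carry comparable total cost, with a standard kernel SVM as a stand-in learner.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.metrics import balanced_accuracy_score

# Synthetic 90/10 imbalanced two-class problem.
X, y = make_classification(n_samples=600, weights=[0.9, 0.1], random_state=5)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=5)

for weights in (None, "balanced"):   # "balanced" plays the role of the weight matrix
    clf = SVC(kernel="rbf", class_weight=weights).fit(X_tr, y_tr)
    score = balanced_accuracy_score(y_te, clf.predict(X_te))
    print(weights, "->", round(score, 3))
```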