Fee-based full text: 24,662 articles
Free: 1,719 articles
Domestic free: 838 articles
By subject category (number of articles):
Electrical engineering: 669
Technical theory: 3
General: 794
Chemical industry: 1,159
Metalworking: 1,345
Machinery and instruments: 3,593
Building science: 4,188
Mining engineering: 375
Energy and power: 655
Light industry: 1,084
Hydraulic engineering: 276
Petroleum and natural gas: 395
Weapons industry: 106
Radio and electronics: 1,387
General industrial technology: 1,524
Metallurgy: 518
Nuclear technology: 149
Automation technology: 8,999
By publication year (number of articles):
2024: 26
2023: 236
2022: 362
2021: 485
2020: 478
2019: 331
2018: 383
2017: 496
2016: 712
2015: 826
2014: 1,419
2013: 1,363
2012: 1,671
2011: 1,944
2010: 1,425
2009: 1,477
2008: 1,320
2007: 1,568
2006: 1,556
2005: 1,384
2004: 1,206
2003: 1,139
2002: 954
2001: 708
2000: 661
1999: 615
1998: 482
1997: 396
1996: 327
1995: 281
1994: 204
1993: 180
1992: 132
1991: 101
1990: 68
1989: 51
1988: 45
1987: 29
1986: 28
1985: 36
1984: 28
1983: 18
1982: 10
1981: 11
1980: 9
1979: 4
1978: 7
1977: 5
1976: 11
1971: 4
A total of 10,000 query results were retrieved (search time: 15 ms).
1.
Machine learning algorithms have been widely used in mine fault diagnosis, and selecting a suitable algorithm is the key factor affecting diagnosis quality. However, the impact of the choice of machine learning algorithm on the prediction performance of mine fault diagnosis models has not been fully evaluated. In this study, windage alteration fault (WAF) diagnosis models based on the K-nearest neighbor algorithm (KNN), multi-layer perceptron (MLP), support vector machine (SVM), and decision tree (DT) are constructed. The applicability of these four algorithms to WAF diagnosis is then explored through a simulation experiment on a T-type ventilation network and a field application study at the Jinchuan No. 2 mine. The accuracy of fault location diagnosis was 100% for all four models in both networks. In the simulation experiment, the mean absolute percentage error (MAPE) between the predicted and real fault volumes of the four models was 0.59%, 97.26%, 123.61%, and 8.78%, respectively; in the field application it was 3.94%, 52.40%, 25.25%, and 7.15%, respectively. A comprehensive evaluation of the fault location and fault volume diagnosis tests showed that the KNN model is the most suitable algorithm for WAF diagnosis, with the DT model second-best. This study realizes intelligent diagnosis of WAFs and provides technical support for intelligent ventilation.
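As a hedged illustration of this kind of four-way comparison, the sketch below fits the same four model families with scikit-learn and scores them by MAPE. The synthetic dataset and all hyperparameters are stand-ins, since the ventilation-network data and the paper's model settings are not given here.

```python
# Sketch only: synthetic data stands in for the ventilation-network dataset,
# and hyperparameters are illustrative, not the paper's.
import numpy as np
from sklearn.datasets import make_regression
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsRegressor
from sklearn.neural_network import MLPRegressor
from sklearn.svm import SVR
from sklearn.tree import DecisionTreeRegressor
from sklearn.metrics import mean_absolute_percentage_error

X, y = make_regression(n_samples=500, n_features=8, noise=5.0, random_state=0)
y = y - y.min() + 1.0  # keep targets positive so MAPE is well defined
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

models = {
    "KNN": KNeighborsRegressor(n_neighbors=5),
    "MLP": MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=2000, random_state=0),
    "SVM": SVR(),
    "DT": DecisionTreeRegressor(random_state=0),
}
for name, model in models.items():
    model.fit(X_tr, y_tr)
    mape = mean_absolute_percentage_error(y_te, model.predict(X_te))
    print(f"{name}: MAPE = {mape:.2%}")
```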
2.
Evaluating the volumetric accuracy of a machine tool remains an open challenge in industry, and a wide variety of technical solutions are available on the market and at research level. Each solution has advantages and disadvantages concerning which errors can be measured, the achievable uncertainty, ease of implementation, the possibility of machine integration and automation, equipment cost, and machine occupation time, so it is not always straightforward which option to choose for a given application. The need to ensure accuracy over the whole lifetime of the machine, together with the availability of monitoring systems developed following the Industry 4.0 trend, is pushing the development of measurement systems that can be integrated into the machine to perform semi-automatic verification procedures, which the machine user can run frequently to monitor the machine's condition. Calibrated-artefact-based calibration and verification solutions have an advantage here over laser-based solutions in cost and feasibility of machine integration, but they must be optimized for each machine and each customer's requirements to achieve the required calibration uncertainty and minimize machine occupation time. This paper introduces a digital-twin-based methodology to simulate all relevant effects in an artefact-based machine tool calibration procedure: the machine itself with its expected error ranges, the artefact geometry and uncertainty, the artefact positions in the workspace, the probe uncertainty, the compensation model, and so on. By parameterizing all relevant variables in the design of the calibration procedure, this simulation methodology can be used to analyse the effect of each design variable on the error-mapping uncertainty, which is of great help in adapting the procedure to each specific machine and user requirement. The simulation methodology and the analysis possibilities are illustrated by applying them to a 3-axis milling machine tool.
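A minimal sketch of the Monte Carlo engine such a digital twin relies on, under strong simplifying assumptions: a single linear scale error stands in for the full machine error model, and all uncertainty values (artefact calibration uncertainty, probe noise, number of placements) are illustrative, not the paper's.

```python
# Sketch only: one linear scale error replaces the paper's parameterized
# machine/artefact/probe models; all numbers are made-up stand-ins.
import numpy as np

rng = np.random.default_rng(0)
true_scale_error = 50e-6      # assumed machine scale error: 50 um/m
L_nom = 0.5                   # nominal calibrated artefact length (m)
u_artefact = 1.0e-6           # artefact calibration uncertainty, 1 sigma (m)
u_probe = 0.5e-6              # probing noise per measurement, 1 sigma (m)
n_placements = 5              # artefact placements in the workspace

identified = []
for _ in range(10_000):                                   # Monte Carlo loop
    L_cal = L_nom + rng.normal(0.0, u_artefact)           # sampled artefact length
    measured = L_nom * (1.0 + true_scale_error) \
               + rng.normal(0.0, u_probe, n_placements)   # reading + probe noise
    identified.append(np.mean((measured - L_cal) / L_cal))

mean, std = np.mean(identified) * 1e6, np.std(identified) * 1e6
print(f"identified scale error: {mean:.2f} um/m, mapping uncertainty (1 sigma): {std:.2f} um/m")
```

Sweeping any of the parameterized inputs (e.g. n_placements or u_probe) and re-running the loop is exactly the kind of design-variable sensitivity analysis the methodology enables.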
3.
The engineering of new glass compositions has shown a steady tendency to move from (educated) trial-and-error to data- and simulation-driven strategies. In this work, we developed a computer program that combines data-driven predictive models (in this case, neural networks) with a genetic algorithm to design glass compositions with desired combinations of properties. First, we induced predictive models for the glass transition temperature (Tg) using a dataset of 45,302 compositions with 39 different chemical elements, and for the refractive index (nd) using a dataset of 41,225 compositions with 38 different chemical elements. Then, we searched for relevant glass compositions using a genetic algorithm informed by a design target of glasses having high nd (1.7 or more) and low Tg (500 °C or less). Two candidate compositions suggested by the combined algorithms were selected and produced in the laboratory. These compositions are significantly different from those in the datasets used to induce the predictive models, showing that the method is indeed capable of exploration. Both glasses met the constraints of the work, which supports the proposed framework. This new tool can therefore be used immediately to accelerate the design of new glasses; these results are a stepping stone on the path toward machine-learning-guided design of novel glasses.
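The coupling of surrogate models and a genetic algorithm can be sketched as below. The linear predict_tg / predict_nd stand-ins replace the paper's trained neural networks, and the GA settings (population size, truncation selection, Gaussian mutation) are illustrative assumptions.

```python
# Sketch only: linear surrogates stand in for the trained neural networks,
# and the composition space is shrunk to 10 components for brevity.
import numpy as np

rng = np.random.default_rng(1)
N_OXIDES = 10  # stand-in for the ~39-element composition space

w_tg = rng.uniform(300, 900, N_OXIDES)   # placeholder property models
w_nd = rng.uniform(1.4, 2.0, N_OXIDES)
predict_tg = lambda x: x @ w_tg          # degC
predict_nd = lambda x: x @ w_nd

def fitness(x):
    # Reward low Tg and high nd; penalize violating the design targets
    tg, nd = predict_tg(x), predict_nd(x)
    penalty = max(0.0, tg - 500.0) + 1000.0 * max(0.0, 1.7 - nd)
    return -(tg - 100.0 * nd + penalty)

def normalize(x):
    x = np.clip(x, 0.0, None)
    return x / x.sum()                   # mole fractions sum to 1

pop = np.array([normalize(rng.random(N_OXIDES)) for _ in range(50)])
for gen in range(200):
    scores = np.array([fitness(x) for x in pop])
    parents = pop[np.argsort(scores)[-25:]]          # truncation selection
    children = []
    for _ in range(25):
        a, b = parents[rng.integers(25, size=2)]
        children.append(normalize((a + b) / 2 + rng.normal(0, 0.02, N_OXIDES)))
    pop = np.vstack([parents, children])             # crossover + mutation above

best = pop[np.argmax([fitness(x) for x in pop])]
print(f"best candidate: Tg = {predict_tg(best):.0f} degC, nd = {predict_nd(best):.3f}")
```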
4.
The ability of landscape architectural projects to mitigate the worst effects of climate change will depend on designed ecological systems, and these systems will be built with plants. Despite the recognition of ecology as an essential driver of landscapes, professionals in landscape architecture too often lack the knowledge and practical skills to create robust vegetative systems; new approaches and tools are required. This article outlines principles and methods for designing biodiverse plant systems for urban sites. Planting methods that increase species richness, functional diversity, and spatial complexity are emphasized as a way of developing more resilient plantings. Selecting species with similar evolutionary adaptations to stress, disturbance, and competition, as well as creating multi-layered compositions of diverse plant morphologies, allows designers to create compatible, long-lived plant mixes. To balance the increased visual complexity of diverse plant mixes, the article explores design techniques for making plantings more appealing to the public. The strategies explored here are based on the projects, experience, and research of Phyto Studio, a Washington, D.C.-based studio, and build on the work described in the author's book, Planting in a Post-Wild World, an exploration of how to create designed plant communities.
5.
The membrane electrode assembly (MEA) is a key component of a proton exchange membrane fuel cell (PEMFC). However, developing a new MEA with desired properties, such as operation under low-humidity conditions without a humidifier, is a time-consuming and costly process. This study applies a machine-learning approach using K-nearest neighbors (KNN) and neural networks (NN) to the MEA development process, identifying a suitable catalyst layer (CL) recipe for the MEA. Minimum redundancy maximum relevance (mRMR) and principal component analysis (PCA) were implemented to identify the most important predictors and reduce the data dimensionality. The number of predictors was found to play an essential role in the accuracy of the KNN and NN models, even though the predictors are mutually correlated. The KNN model with K = 7 minimized the model loss, at 11.9%. The NN model, built with three hidden layers of nine, eight, and nine nodes, achieved the lowest error: 0.1293 for the Pt catalyst and 0.031 for PVA, a good additive for blending into the CL. However, even though its error is low, the NN's prediction for PVA appears inaccurate regardless of the model structure. The KNN model is therefore more appropriate for CL recipe prediction.
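A hedged sketch of the preprocessing-plus-KNN pipeline with K = 7: PCA stands in for the paper's mRMR-plus-PCA step, and a synthetic dataset replaces the MEA catalyst-layer data, which is not available here.

```python
# Sketch only: synthetic data and an arbitrary PCA dimension replace the
# paper's CL-recipe dataset and mRMR-selected predictors.
from sklearn.datasets import make_regression
from sklearn.decomposition import PCA
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X, y = make_regression(n_samples=300, n_features=12, n_informative=6, random_state=0)
model = make_pipeline(
    StandardScaler(),                     # put predictors on a common scale
    PCA(n_components=5),                  # dimensionality reduction step
    KNeighborsRegressor(n_neighbors=7),   # K = 7, as in the paper's best KNN
)
scores = cross_val_score(model, X, y, cv=5, scoring="r2")
print(f"5-fold R^2: {scores.mean():.3f} +/- {scores.std():.3f}")
```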
6.
The digital age of the future is ‘not out there to be discovered’; it needs to be ‘designed’. The design challenge has to address questions about how we want to live, work, and learn (as individuals and as communities) and what we value and appreciate, e.g., reflecting on quality of life and creating inclusive societies. An overriding design trade-off for the digital age is whether new developments will widen the digital divide or create more inclusive societies. Sustaining inclusive societies means allowing people of all ages and abilities to exploit information technologies for personally meaningful activities. Meta-design fosters the design of socio-technical environments that end-user developers can modify and evolve at use time to improve their quality of life and favour their inclusion in society. This paper describes three case studies in the domain of assistive technologies in which end users themselves cannot act as end-user developers, but someone else (e.g., a caregiver or a clinician) must take on this role, requiring multi-tiered architectures. The design trade-offs and requirements for meta-design identified in these case studies and in other researchers’ projects are described to inform the development of future socio-technical environments focused on social inclusion.
7.
In this paper, we investigate how adaptive operator selection techniques can efficiently manage the balance between exploration and exploitation in an evolutionary algorithm when solving combinatorial optimization problems. We introduce new high-level reactive search strategies based on a generic algorithm controller that schedules the basic variation operators of the evolutionary algorithm according to the observed state of the search. Our experiments on SAT instances show that reactive search strategies improve the performance of the solving algorithm.
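One common way to realize adaptive operator selection is probability matching, sketched below on a toy OneMax problem. The two bit-flip operators, the reward definition, and all parameters are illustrative assumptions, not the paper's controller.

```python
# Sketch only: probability matching on OneMax, not the paper's reactive
# strategies; operators, reward, and parameters are made-up stand-ins.
import random

random.seed(0)
N = 100
x = [random.randint(0, 1) for _ in range(N)]   # OneMax: maximize number of 1s

def flip(bits, k):
    y = bits[:]
    for i in random.sample(range(N), k):
        y[i] ^= 1
    return y

operators = [lambda b: flip(b, 1), lambda b: flip(b, 3)]
quality = [1.0, 1.0]   # running reward estimate per operator
p_min, alpha = 0.1, 0.3

for step in range(5000):
    total = sum(quality)
    probs = [p_min + (1 - 2 * p_min) * q / total for q in quality]
    op = 0 if random.random() < probs[0] else 1  # select operator by probability
    y = operators[op](x)
    reward = max(0, sum(y) - sum(x))             # credit: fitness improvement
    quality[op] = (1 - alpha) * quality[op] + alpha * reward
    if sum(y) >= sum(x):                         # accept non-worsening moves
        x = y

print(f"final fitness: {sum(x)}/{N}, operator probabilities ~ {[round(p, 2) for p in probs]}")
```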
8.
In architectural design, surface shapes are commonly subject to geometric constraints imposed by material, fabrication, or assembly. Rationalization algorithms can convert a freeform design into a form feasible for production, but they often require design modifications that may not comply with the design intent, and they offer only limited support for exploring alternative feasible shapes, owing to the high complexity of the underlying optimization. We address these shortcomings and present a computational framework for interactive shape exploration of discrete geometric structures in the context of freeform architectural design. Our method is formulated as a mesh optimization subject to shape constraints. The formulation can enforce soft and hard constraints at the same time, and handles equality and inequality constraints in a unified way. We propose a novel numerical solver that splits the optimization into a sequence of simple subproblems that can be solved efficiently and accurately. Based on this algorithm, we develop a system that allows the user to explore designs satisfying geometric constraints. The system offers full control over the exploration process by providing direct access to the specification of the design space, while the complexity of the underlying optimization is hidden from the user, who communicates with the system through intuitive interfaces.
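The "sequence of simple subproblems" idea can be sketched with a toy local/global projection solver: each iteration projects every edge of a polyline onto a unit-length constraint (the local subproblems) and then averages the targets with a soft closeness term (the global step). The constraint, weights, and geometry are illustrative assumptions, far simpler than the paper's solver.

```python
# Sketch only: a toy projection-based solver, not the paper's method;
# the unit-edge-length constraint and weights are made-up stand-ins.
import numpy as np

rng = np.random.default_rng(2)
P0 = np.cumsum(rng.normal(0, 1, (10, 2)), axis=0)   # input polyline (the "design")
P = P0.copy()
W_EDGE, W_CLOSE = 1.0, 0.05   # constraint weight vs. soft closeness-to-design weight

for _ in range(200):
    num = W_CLOSE * P0                       # soft term: stay near the design
    den = np.full(len(P), W_CLOSE)
    for i in range(len(P) - 1):              # local step: project each edge
        mid = (P[i] + P[i + 1]) / 2          # onto the unit-length constraint
        d = P[i + 1] - P[i]
        d = d / np.linalg.norm(d)
        num[i] += W_EDGE * (mid - d / 2); den[i] += W_EDGE
        num[i + 1] += W_EDGE * (mid + d / 2); den[i + 1] += W_EDGE
    P = num / den[:, None]                   # global step: weighted average

lengths = np.linalg.norm(np.diff(P, axis=0), axis=1)
print("edge lengths after optimization:", np.round(lengths, 3))
```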
9.
Although greedy algorithms are highly efficient, they often yield suboptimal solutions to the ensemble pruning problem, since their exploration of the search space is limited. Another marked defect of almost all existing ensemble pruning algorithms, including greedy ones, is that they simply discard all classifiers that fail in the competition of ensemble selection, wasting useful resources and information. Motivated by these observations, a greedy Reverse Reduce-Error (RRE) pruning algorithm incorporating a subtraction operation is proposed in this work. The RRE algorithm makes the most of the defeated candidate networks: the Worst Single Model (WSM) is chosen, and its votes are subtracted from the votes of the components selected into the pruned ensemble. The rationale is that, in most cases, the WSM is likely to misclassify the test samples. Unlike classical Reduce-Error (RE) pruning, the near-optimal solution is produced from the pruning error of all available sequential subensembles. In addition, the backfitting step of the RE algorithm is replaced in RRE with the selection of a WSM, and the problem of ties is resolved more naturally. Finally, a soft-voting approach is employed when testing the RRE algorithm. The performance of the RE and RRE algorithms, together with two baseline methods (selecting the Best Single Model (BSM) from the initial ensemble, and retaining all member networks of the initial ensemble (ALL)), is evaluated on seven benchmark classification tasks under different initial ensemble setups. The results of the empirical investigation show the superiority of RRE over the other three ensemble pruning algorithms.
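The vote-subtraction step at the heart of RRE can be sketched as follows, assuming a bagged tree ensemble and soft voting. This toy shows only the single WSM-subtraction step, not the full sequential pruning procedure.

```python
# Sketch only: one WSM vote-subtraction step on a bagged tree ensemble,
# standing in for the full RRE pruning procedure.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(3)
X, y = make_classification(n_samples=1000, n_features=20, random_state=0)
X_tr, X_rest, y_tr, y_rest = train_test_split(X, y, test_size=0.4, random_state=0)
X_val, X_te, y_val, y_te = train_test_split(X_rest, y_rest, test_size=0.5, random_state=0)

# Bagged ensemble of shallow trees as the initial ensemble
members = []
for i in range(15):
    idx = rng.integers(0, len(X_tr), len(X_tr))
    members.append(DecisionTreeClassifier(max_depth=3, random_state=i).fit(X_tr[idx], y_tr[idx]))

acc = lambda vote, y_true: np.mean(vote.argmax(axis=1) == y_true)

# Pick the Worst Single Model on a held-out pruning set
val_probas = np.array([m.predict_proba(X_val) for m in members])
wsm = int(np.argmin([acc(p, y_val) for p in val_probas]))

# Soft voting on the test set: subtract the WSM's votes from the rest
te_probas = np.array([m.predict_proba(X_te) for m in members])
keep = [j for j in range(len(members)) if j != wsm]
full_vote = te_probas.sum(axis=0)
rre_vote = te_probas[keep].sum(axis=0) - te_probas[wsm]

print(f"full ensemble accuracy:    {acc(full_vote, y_te):.3f}")
print(f"with WSM votes subtracted: {acc(rre_vote, y_te):.3f}")
```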
10.
This paper presents a stochastic performance modelling approach that can be used to optimise the design and operational reliability of complex chemical engineering processes. The framework applies to processes comprising multiple units, including cases where closed-form process performance functions are unavailable or difficult to derive from first principles, as is often the case in practice. An interface that enables automated two-way communication between Matlab® and a process simulation environment is used to generate large sets of process responses. The resulting constrained optimisation problem is solved using both Monte Carlo Simulation (MCS) and the First Order Reliability Method (FORM), providing a wide range of stochastic process performance measures. Adding such capabilities to traditional deterministic process simulators gives a more informed basis for selecting optimum design factors and a simple way of enhancing overall process reliability and cost-efficiency. Two case studies are presented to highlight the applicability and benefits of the approach.
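The MCS half of such a framework reduces to sampling a black-box response. In the sketch below, a trivial stand-in function (process_response, with made-up coefficients and a hypothetical purity specification) takes the place of the simulator call routed through the Matlab® interface; FORM would instead linearize the limit state at the most probable failure point.

```python
# Sketch only: process_response is a hypothetical stand-in for a simulator
# call; all coefficients, distributions, and the spec limit are made up.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(4)

def process_response(feed_rate, temperature):
    # Stand-in for the simulator: returns product purity (%)
    return 99.0 - 0.08 * (feed_rate - 100.0) ** 2 / 100.0 - 0.05 * (temperature - 350.0)

n = 200_000
feed = rng.normal(100.0, 5.0, n)    # uncertain operating inputs
temp = rng.normal(350.0, 10.0, n)
purity = process_response(feed, temp)

SPEC = 98.0                          # minimum acceptable purity (hypothetical)
pf = np.mean(purity < SPEC)          # Monte Carlo failure probability estimate
beta = -norm.ppf(pf)                 # generalized reliability index
print(f"P(failure) = {pf:.4f}, reliability index beta = {beta:.2f}")
```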