Full-text access type
Paid full text | 5167 articles |
Free | 319 articles |
Free (domestic) | 7 articles |
Subject classification
Electrical engineering | 35 articles |
General | 2 articles |
Chemical industry | 1429 articles |
Metalworking | 60 articles |
Machinery and instruments | 89 articles |
Building science | 183 articles |
Mining engineering | 9 articles |
Energy and power engineering | 179 articles |
Light industry | 1116 articles |
Hydraulic engineering | 50 articles |
Oil and natural gas | 16 articles |
Radio and electronics | 236 articles |
General industrial technology | 773 articles |
Metallurgy | 265 articles |
Nuclear technology | 18 articles |
Automation technology | 1033 articles |
Publication year
2024 | 9 articles |
2023 | 61 articles |
2022 | 183 articles |
2021 | 273 articles |
2020 | 158 articles |
2019 | 204 articles |
2018 | 233 articles |
2017 | 200 articles |
2016 | 227 articles |
2015 | 173 articles |
2014 | 248 articles |
2013 | 427 articles |
2012 | 364 articles |
2011 | 408 articles |
2010 | 291 articles |
2009 | 257 articles |
2008 | 243 articles |
2007 | 240 articles |
2006 | 153 articles |
2005 | 140 articles |
2004 | 139 articles |
2003 | 108 articles |
2002 | 93 articles |
2001 | 55 articles |
2000 | 56 articles |
1999 | 55 articles |
1998 | 118 articles |
1997 | 65 articles |
1996 | 52 articles |
1995 | 44 articles |
1994 | 34 articles |
1993 | 27 articles |
1992 | 25 articles |
1991 | 21 articles |
1990 | 14 articles |
1989 | 15 articles |
1988 | 10 articles |
1987 | 3 articles |
1985 | 9 articles |
1984 | 6 articles |
1983 | 7 articles |
1982 | 6 articles |
1981 | 4 articles |
1980 | 5 articles |
1979 | 2 articles |
1978 | 5 articles |
1977 | 9 articles |
1976 | 5 articles |
1975 | 2 articles |
1973 | 2 articles |
Sort order: 5493 results found (search time: 0 ms)
71.
Sandra García-Bustos, Mónica Mite, Francisco Vera 《Quality and Reliability Engineering International》2016,32(5):1741-1755
This article analyzes the simultaneous control of several correlated Poisson variables using the Variable Dimension Linear Combination of Poisson Variables (VDLCP) control chart, a variable-dimension version of the LCP chart. This control chart uses as its test statistic a linear combination of correlated Poisson variables applied adaptively, i.e., it monitors either p1 or p variables (p1 < p) depending on the last value of the statistic. To analyze the performance of this chart, we have developed software that finds the best chart parameters by optimizing the out-of-control average run length (ARL) for a shift that the practitioner wishes to detect as quickly as possible, subject to a fixed value of the in-control ARL. Markov chains and genetic algorithms were used in developing this software. The results show a performance improvement over the LCP chart. Copyright © 2015 John Wiley & Sons, Ltd.
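The paper's adaptive (variable-dimension) design relies on Markov-chain calculations, but the basic task it optimizes, estimating a chart's average run length for a linear combination of correlated Poisson counts, can be sketched by simulation. Everything below (the common-shock correlation model, the parameter values, the control limit) is an illustrative assumption, not the authors' design:

```python
import math
import random

def poisson(rng, lam):
    """Sample Poisson(lam) with Knuth's multiplication method."""
    limit = math.exp(-lam)
    k, p = 0, 1.0
    while True:
        k += 1
        p *= rng.random()
        if p <= limit:
            return k - 1

def simulate_arl(shift, ucl=16.0, n_runs=300, max_len=5000, seed=42):
    """Estimate the ARL of a one-sided chart on S = X1 + X2, where
    X1 = Y0 + Y1 and X2 = Y0 + Y2 share a common Poisson shock Y0
    (the shared shock induces positive correlation between X1 and X2)."""
    rng = random.Random(seed)
    total = 0
    for _ in range(n_runs):
        t = 0
        while t < max_len:
            t += 1
            y0 = poisson(rng, 1.0)               # shared shock component
            x1 = y0 + poisson(rng, 2.0 + shift)  # monitored counts; shift > 0
            x2 = y0 + poisson(rng, 3.0 + shift)  # models the out-of-control state
            if x1 + x2 > ucl:                    # alarm: run ends
                break
        total += t
    return total / n_runs

arl_in = simulate_arl(shift=0.0)   # in-control ARL (should be large)
arl_out = simulate_arl(shift=2.0)  # out-of-control ARL (should be small)
```

A design program like the one the abstract describes would then search (e.g., with a genetic algorithm) for weights and limits that minimize the out-of-control ARL subject to a fixed in-control ARL.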
72.
Burestedt E, Narvaez A, Ruzgas T, Gorton L, Emnéus J, Domínguez E, Marko-Varga G 《Analytical chemistry》1996,68(9):1605-1611
The response currents obtained for tyrosinase-modified Teflon/graphite, carbon paste, and solid graphite electrodes in the presence of catechol are analyzed primarily using rotating disk electrode experiments. The rate-limiting steps, such as the electrochemical reduction of o-quinones and the enzymatic reduction of oxygen as well as the enzymatic oxidation of catechol, are theoretically considered and experimentally demonstrated for the different electrode configurations.
73.
The localization of the components of an object near a device, before the real interaction takes place, is usually determined by measuring the proximity of the object's features to the device. To do this efficiently, hierarchical decompositions are used, so that the features of the objects are classified into several types of cells, usually rectangular. In this paper we propose a solution based on the classification of a set of points situated on the device in a little-known spatial decomposition called a tetra-tree. Using this type of spatial decomposition yields several quantitative and qualitative properties that allow a more realistic and intuitive visual interaction, as well as the possibility of selecting inaccessible components. These features could be used in virtual sculpting or accessibility tasks. To demonstrate these properties, we have compared an interaction system based on tetra-trees with one based on octrees.
74.
Artur J. Lemonte, Francisco Cribari-Neto 《Computational statistics & data analysis》2010,54(5):1307-718
The Birnbaum-Saunders regression model is commonly used in reliability studies. We address the issue of performing inference in this class of models when the number of observations is small. Our simulation results suggest that the likelihood ratio test tends to be liberal when the sample size is small. We obtain a correction factor which reduces the size distortion of the test. Also, we consider a parametric bootstrap scheme to obtain improved critical values and improved p-values for the likelihood ratio test. The numerical results show that the modified tests are more reliable in finite samples than the usual likelihood ratio test. We also present an empirical application.
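The parametric bootstrap calibration the abstract mentions is easy to sketch outside the Birnbaum-Saunders setting. The toy below tests H0: rate = 1 for exponential data, where the likelihood ratio statistic has a closed form; the model choice and all names are illustrative assumptions, not the authors' implementation:

```python
import math
import random

def loglik(rate, xs):
    """Exponential log-likelihood at the given rate."""
    return len(xs) * math.log(rate) - rate * sum(xs)

def lr_stat(xs, rate0):
    """Likelihood ratio statistic 2*(l(mle) - l(rate0));
    the MLE of the exponential rate is n / sum(x)."""
    mle = len(xs) / sum(xs)
    return 2.0 * (loglik(mle, xs) - loglik(rate0, xs))

def bootstrap_pvalue(xs, rate0=1.0, n_boot=999, seed=1):
    """Parametric bootstrap p-value: resample from the H0 model and count
    how often the resampled statistic exceeds the observed one."""
    rng = random.Random(seed)
    obs = lr_stat(xs, rate0)
    hits = sum(
        lr_stat([rng.expovariate(rate0) for _ in xs], rate0) >= obs
        for _ in range(n_boot)
    )
    return (hits + 1) / (n_boot + 1)   # add-one rule avoids p = 0

rng = random.Random(7)
far_from_h0 = [rng.expovariate(5.0) for _ in range(30)]  # true rate 5, H0 says 1
p_far = bootstrap_pvalue(far_from_h0)
```

Replacing the chi-squared reference distribution with this bootstrap distribution is what gives the improved small-sample critical values the abstract reports.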
75.
Francisco J. Pino, Oscar Pedreira 《Journal of Systems and Software》2010,83(10):1662-1677
In software process improvement (SPI), few small organizations use models to guide the management and deployment of their improvement initiatives. This is largely because many of these models do not consider the special characteristics of small businesses, nor the appropriate strategies for deploying an SPI initiative in this type of organization. It should also be noted that the models which direct improvement implementation in small settings do not present an explicit process for organizing and guiding the internal work of the employees involved in implementing the improvement opportunities. In this paper we propose a lightweight process which takes into account strategies appropriate to this type of organization. Our proposal, known as a "Lightweight process to incorporate improvements", follows the philosophy of the Scrum agile method, aiming to give detailed guidelines for managing and carrying out the incorporation of improvement opportunities into processes, and their putting into practice, in small companies. We have applied the proposed process in two small companies by means of the case study research method, and the initial results indicate that it is indeed suitable for small businesses.
76.
77.
In this paper we present adaptive algorithms for solving the uniform continuous piecewise affine approximation problem (UCPA) in the case of Lipschitz or convex functions. The algorithms are based on the tree approximation (adaptive splitting) procedure. Uniform convergence is achieved by means of global optimization techniques for obtaining tight upper bounds of the local error estimate (the splitting criterion). We give numerical results for the distance function to 2D and 3D geometric bodies. The resulting trees can retrieve the values of the target function quickly.
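As a toy illustration of the adaptive-splitting idea (not the paper's algorithm, which uses global optimization to bound the local error), the 1-D sketch below splits an interval at its midpoint while the chord's midpoint error exceeds a tolerance. For a convex function such as x², the chord error is largest at the midpoint, so the tolerance then holds uniformly:

```python
import bisect

def adaptive_pwa(f, a, b, tol):
    """Adaptive splitting: return the breakpoints of a piecewise affine
    interpolant of f on [a, b] whose midpoint chord error is <= tol."""
    pts = [a, b]
    stack = [(a, b)]
    while stack:
        lo, hi = stack.pop()
        mid = 0.5 * (lo + hi)
        chord_at_mid = 0.5 * (f(lo) + f(hi))  # interpolant value at midpoint
        if abs(chord_at_mid - f(mid)) > tol:  # splitting criterion
            pts.append(mid)
            stack.append((lo, mid))
            stack.append((mid, hi))
    return sorted(pts)

def evaluate(pts, f, x):
    """Evaluate the interpolant at x; the sorted-breakpoint lookup plays
    the role of the fast tree retrieval mentioned in the abstract."""
    i = min(max(bisect.bisect_right(pts, x), 1), len(pts) - 1)
    lo, hi = pts[i - 1], pts[i]
    t = (x - lo) / (hi - lo)
    return (1.0 - t) * f(lo) + t * f(hi)

f = lambda x: x * x
pts = adaptive_pwa(f, 0.0, 1.0, tol=1e-3)
```

For f(x) = x² the chord error on an interval of length h is h²/4, attained at the midpoint, so splitting stops once h²/4 <= tol and the bound is uniform over [0, 1].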
78.
A new fast prototype selection method based on clustering (total citations: 2, self-citations: 1, citations by others: 1)
J. Arturo Olvera-López, J. Ariel Carrasco-Ochoa, J. Francisco Martínez-Trinidad 《Pattern Analysis & Applications》2010,13(2):131-141
In supervised classification, a training set T is given to a classifier for classifying new prototypes. In practice, not all the information in T is useful for classifiers, so it is convenient to discard irrelevant prototypes from T. This process is known as prototype selection, an important task for classifiers since it can reduce the time needed for classification or training. In this work, we propose a new fast prototype selection method for large datasets, based on clustering, which selects border prototypes and some interior prototypes. Experimental results showing the performance of our method and comparing accuracy and runtimes against other prototype selection methods are reported.
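A much-simplified sketch of the border/interior idea described above (illustrative only; the authors' clustering and selection rules differ in detail): cluster the training points, keep one mean-nearest prototype per class-homogeneous cluster, and in mixed clusters keep the border prototypes, i.e., points whose nearest neighbour carries a different label.

```python
import random

def dist2(p, q):
    """Squared Euclidean distance between two point tuples."""
    return sum((a - b) ** 2 for a, b in zip(p, q))

def mean(pts):
    return tuple(sum(c) / len(pts) for c in zip(*pts))

def kmeans_groups(points, k, iters=15, seed=0):
    """Plain k-means; returns the final clusters as lists of points."""
    rng = random.Random(seed)
    centers = rng.sample(points, k)
    groups = [[] for _ in range(k)]
    for _ in range(iters):
        groups = [[] for _ in range(k)]
        for p in points:
            groups[min(range(k), key=lambda j: dist2(p, centers[j]))].append(p)
        centers = [mean(g) if g else centers[j] for j, g in enumerate(groups)]
    return groups

def select_prototypes(data, k=2):
    """data: list of (point, label) pairs with hashable, distinct points."""
    points = [p for p, _ in data]
    label = {p: l for p, l in data}
    selected = []
    for g in kmeans_groups(points, k):
        if not g:
            continue
        if len({label[p] for p in g}) == 1:
            c = mean(g)                     # homogeneous: one interior prototype
            selected.append(min(g, key=lambda p: dist2(p, c)))
        else:                               # mixed: keep border prototypes only
            for p in g:
                nn = min((q for q in g if q != p), key=lambda q: dist2(p, q))
                if label[nn] != label[p]:
                    selected.append(p)
    return selected

# Two well-separated grid "blobs", one per class (toy data, not the paper's).
data = [((i * 0.1, j * 0.1), 0) for i in range(5) for j in range(5)]
data += [((3 + i * 0.1, 3 + j * 0.1), 1) for i in range(5) for j in range(5)]
selected = select_prototypes(data, k=2)
```

The point of the sketch is the size reduction: the classifier afterwards trains on `selected`, a small fraction of the original set.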
79.
Francisco J. Rubio, Francisco J. Valero, Jose Luis Suñer, Vicente Mata 《Asian journal of control》2010,12(4):468-479
This paper addresses the solution of smooth trajectory planning for industrial robots in environments with obstacles using a direct method, creating the trajectory gradually as the robot moves. The presented method deals with the uncertainties associated with the lack of knowledge of kinematic properties of intermediate via-points since they are generated as the algorithm evolves looking for the solution. Several cost functions are also proposed, which use the time that has been calculated to guide the robot motion. The method has been applied successfully to a PUMA 560 robot and four operational parameters (execution time, computational time, distance travelled and number of configurations) have been computed to study the properties and influence of each cost function on the trajectory obtained. Copyright © 2010 John Wiley and Sons Asia Pte Ltd and Chinese Automatic Control Society
80.
Francisco Louzada, Osvaldo Anacleto-Junior, Cecilia Candolo, Josimara Mazucheli 《Expert systems with applications》2011,38(10):12717-12720
Credit scoring modelling is one of the leading formal tools for supporting the granting of credit. Its core objective is to generate a score by means of which potential clients can be ranked in order of their probability of default. A critical factor is whether a credit scoring model is accurate enough to classify a client correctly as a good or bad payer. In this context the concept of bootstrap aggregating (bagging) arises. The basic idea is to generate multiple classifiers by obtaining predicted values from models fitted to several replicated datasets, and then to combine them into a single predictive classification in order to improve classification accuracy. In this paper we propose a new bagging-type variant, which we call poly-bagging, consisting of combining predictors over a succession of resamplings. The study is framed in terms of credit scoring modelling. The proposed poly-bagging procedure was applied to several artificial datasets and to a real credit-granting dataset, for up to three successions of resamplings. We observed better classification accuracy for the two-bagged and three-bagged models in all the setups considered. These results strongly indicate that the poly-bagging approach may improve the modelling performance measures while keeping a flexible and straightforward bagging-type structure that is easy to implement.
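The succession-of-resamplings idea can be sketched recursively: level 1 is ordinary bagging of a base learner, and level L bags (L-1)-level ensembles, each fitted on a bootstrap resample. The decision-stump base learner, the toy data, and the recursion below are illustrative assumptions, not the authors' procedure:

```python
import random

def predict_stump(rule, x):
    t, flip = rule
    return int((x > t) != flip)

def fit_stump(data):
    """Best 1-D threshold classifier on (x, y) pairs with labels 0/1."""
    cands = [(t, f) for t in {x for x, _ in data} for f in (False, True)]
    return min(cands, key=lambda r: sum(predict_stump(r, x) != y for x, y in data))

def poly_bag(data, m, levels, rng):
    """levels=1 is plain bagging of stumps; levels>1 bags (levels-1)-level
    ensembles, each fitted on a bootstrap resample (the 'succession')."""
    if levels == 0:
        rule = fit_stump(data)
        return lambda x: predict_stump(rule, x)
    members = [
        poly_bag([rng.choice(data) for _ in data], m, levels - 1, rng)
        for _ in range(m)
    ]
    return lambda x: int(sum(f(x) for f in members) * 2 >= m)  # majority vote

rng = random.Random(3)
# Toy "good/bad payer" data: score x in [0, 1], default iff x > 0.5.
train = [(i / 30.0, int(i / 30.0 > 0.5)) for i in range(30)]
two_bagged = poly_bag(train, m=5, levels=2, rng=rng)
```

With `levels=2` the ensemble contains m² = 25 stumps; the paper's experiments go up to three successions, i.e., `levels=3` in this sketch.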