Full-text availability: subscription 4,353 articles; free 333; domestic open-access 7.
By subject: Electrical Engineering 31; General 2; Chemical Industry 1,181; Metalworking 44; Machinery and Instrumentation 79; Building Science 167; Mining Engineering 8; Energy and Power 165; Light Industry 1,003; Water Resources Engineering 45; Petroleum and Natural Gas 11; Radio and Electronics 217; General Industrial Technology 648; Metallurgy 99; Nuclear Technology 18; Automation 975.
By year: 2024 (5); 2023 (51); 2022 (53); 2021 (255); 2020 (151); 2019 (192); 2018 (196); 2017 (178); 2016 (199); 2015 (154); 2014 (231); 2013 (379); 2012 (327); 2011 (379); 2010 (263); 2009 (240); 2008 (221); 2007 (218); 2006 (140); 2005 (130); 2004 (124); 2003 (97); 2002 (82); 2001 (46); 2000 (45); 1999 (42); 1998 (54); 1997 (41); 1996 (25); 1995 (24); 1994 (18); 1993 (15); 1992 (17); 1991 (20); 1990 (11); 1989 (13); 1988 (5); 1987 (3); 1985 (7); 1984 (5); 1983 (6); 1982 (5); 1981 (2); 1980 (2); 1979 (2); 1978 (4); 1977 (7); 1976 (2); 1975 (2); 1973 (2).
A total of 4,693 results were returned.
101.
This article analyzes the simultaneous control of several correlated Poisson variables using the Variable Dimension Linear Combination of Poisson Variables (VDLCP) control chart, a variable-dimension version of the LCP chart. The chart uses as its test statistic a linear combination of the correlated Poisson variables, applied adaptively: it monitors either p1 or p variables (p1 < p) depending on the last value of the statistic. To analyze the performance of this chart, we developed software that finds the best chart parameters by optimizing the out-of-control average run length (ARL) for the shift the practitioner wishes to detect as quickly as possible, subject to a fixed value of the in-control ARL. Markov chains and genetic algorithms were used in developing this software. The results show improved performance compared to the LCP chart. Copyright © 2015 John Wiley & Sons, Ltd.
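A minimal sketch of the adaptive (variable-dimension) monitoring idea described in this abstract, with hypothetical coefficient vectors and warning/control limits; the paper obtains these parameters by ARL optimization via Markov chains and genetic algorithms, which is not reproduced here:

```python
import numpy as np

# Hypothetical chart parameters (in practice these are optimized to
# minimize the out-of-control ARL subject to a fixed in-control ARL).
C_REDUCED = np.array([1.0, 0.5])            # weights for p1 = 2 variables
C_FULL    = np.array([1.0, 0.5, 0.8, 0.3])  # weights for p = 4 variables
WARNING_LIMIT = 8.0   # above this, switch to monitoring all p variables
CONTROL_LIMIT = 14.0  # above this, signal out of control

def vdlcp_step(counts, use_full):
    """One monitoring step: combine either p1 or p Poisson counts."""
    if use_full:
        stat = float(C_FULL @ counts)           # all p variables
    else:
        stat = float(C_REDUCED @ counts[:2])    # only the first p1 variables
    signal = stat > CONTROL_LIMIT
    next_use_full = stat > WARNING_LIMIT        # adapt dimension for next sample
    return stat, signal, next_use_full

# Example run on simulated Poisson counts (independent here for
# simplicity; the paper treats correlated variables).
rng = np.random.default_rng(0)
use_full = False
for t in range(5):
    counts = rng.poisson(lam=[4, 3, 5, 2])
    stat, signal, use_full = vdlcp_step(counts, use_full)
    print(f"t={t} stat={stat:.1f} signal={signal} monitor_all={use_full}")
```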
102.
103.
104.
105.
To identify biochemical markers for the botanical origin of heather (Erica) honey, the phenolic metabolites present in heather floral nectar, collected from the honey-stomach of bees gathering nectar from these flowers, were analysed. The flavonoid fraction of the nectar contained four main flavonoids. Two of them were quercetin and kaempferol 3-rhamnosides, and the other two were tentatively identified as myricetin 3-methyl ether and isorhamnetin 3-rhamnosides. Since the natural glycosides are hydrolysed by bee enzymes to yield the corresponding aglycones, which are the metabolites detected in honey, acid hydrolysis of the nectar glycosides was carried out. The aglycones quercetin, myricetin 3-methyl ether, kaempferol and isorhamnetin were identified, as well as the gallic acid derivative ellagic acid. Analysis of Portuguese heather honey samples showed that ellagic acid was present in all samples in significant amounts, ranging between 100 µg and 600 µg per 100 g of honey. The other nectar-derived flavonoids were also present, although some in very variable amounts. Ellagic acid and myricetin 3-methyl ether, which have not been detected in any other monofloral honey investigated so far, with the only exception being a French honey sample of the botanically related Calluna (Ericaceae) that also contained ellagic acid, appear to be the most useful potential markers for the floral origin of heather honey. However, more detailed and extensive investigations are needed to prove the utility of these markers.
106.
In the classification framework there are problems in which the number of examples per class is not equitably distributed, commonly known as imbalanced data sets. This situation is a handicap when trying to identify the minority classes, as learning algorithms are not usually adapted to such characteristics. A common approach to dealing with imbalanced data sets is the use of a preprocessing step. In this paper we analyze the usefulness of data complexity measures for evaluating the behavior of undersampling and oversampling methods. Two classical learning methods, C4.5 and PART, are considered over a wide range of imbalanced data sets built from real data. Specifically, oversampling techniques and an evolutionary undersampling method have been selected for the study. We extract behavior patterns from the results in the data complexity space defined by the measures, coding them as intervals. We then derive rules from the intervals that describe both good and bad behavior of C4.5 and PART under the different preprocessing approaches, thus obtaining a complete characterization of the data sets and of the differences between the oversampling and undersampling results.
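As a rough illustration of the kind of preprocessing this abstract refers to, here is a minimal random oversampling/undersampling sketch in plain NumPy; the paper itself studies more sophisticated methods (e.g. evolutionary undersampling), so this is only a baseline analogue:

```python
import numpy as np

def random_oversample(X, y, minority_label, rng):
    """Duplicate random minority examples until classes are balanced."""
    minority = np.where(y == minority_label)[0]
    majority = np.where(y != minority_label)[0]
    extra = rng.choice(minority, size=len(majority) - len(minority), replace=True)
    keep = np.concatenate([majority, minority, extra])
    return X[keep], y[keep]

def random_undersample(X, y, minority_label, rng):
    """Drop random majority examples until classes are balanced."""
    minority = np.where(y == minority_label)[0]
    majority = np.where(y != minority_label)[0]
    kept_majority = rng.choice(majority, size=len(minority), replace=False)
    keep = np.concatenate([kept_majority, minority])
    return X[keep], y[keep]

rng = np.random.default_rng(42)
X = rng.normal(size=(100, 2))
y = np.array([0] * 90 + [1] * 10)        # 9:1 imbalance, label 1 is minority
X_over, y_over = random_oversample(X, y, minority_label=1, rng=rng)
X_under, y_under = random_undersample(X, y, minority_label=1, rng=rng)
print(np.bincount(y_over), np.bincount(y_under))   # balanced class counts
```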
107.
The localization of an object's components near a device, prior to the actual interaction, is usually determined by measuring the proximity of the object's features to the device. To do this efficiently, hierarchical decompositions are used, so that the features of the objects are classified into several types of cells, usually rectangular. In this paper we propose a solution based on classifying a set of points situated on the device within a little-known spatial decomposition named the tetra-tree. Using this type of spatial decomposition provides several quantitative and qualitative properties that allow a more realistic and intuitive visual interaction, as well as the possibility of selecting inaccessible components. These features could be used in virtual sculpting or accessibility tasks. To demonstrate these properties, we have compared an interaction system based on tetra-trees with one based on octrees.
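The tetra-tree decomposition itself is not specified in the abstract; as a baseline of the kind the authors compare against, here is a minimal octree-style proximity query sketch (hypothetical structure, depth and leaf-size parameters):

```python
import numpy as np

class OctreeNode:
    """Minimal octree over points in an axis-aligned cube (illustrative only)."""
    def __init__(self, center, half, points, depth=0, max_depth=4, leaf_size=8):
        self.center, self.half = center, half
        self.children = []
        if depth < max_depth and len(points) > leaf_size:
            for octant in range(8):                      # one child per octant
                sign = np.array([(octant >> k) & 1 for k in range(3)]) * 2 - 1
                c = center + sign * half / 2
                mask = np.all(np.sign(points - center + 1e-12) == sign, axis=1)
                self.children.append(OctreeNode(c, half / 2, points[mask],
                                                depth + 1, max_depth, leaf_size))
        self.points = points if not self.children else None

    def near(self, q, radius):
        """Collect stored points within `radius` of query point q."""
        # Prune subtrees whose cube cannot intersect the query ball.
        if np.any(np.abs(q - self.center) > self.half + radius):
            return np.empty((0, 3))
        if self.points is not None:                      # leaf: test points
            d = np.linalg.norm(self.points - q, axis=1)
            return self.points[d <= radius]
        return np.vstack([c.near(q, radius) for c in self.children])

rng = np.random.default_rng(1)
pts = rng.uniform(-1, 1, size=(500, 3))
tree = OctreeNode(np.zeros(3), np.ones(3), pts)
print(len(tree.near(np.zeros(3), 0.2)), "points within 0.2 of the origin")
```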
108.
The Birnbaum-Saunders regression model is commonly used in reliability studies. We address the issue of performing inference in this class of models when the number of observations is small. Our simulation results suggest that the likelihood ratio test tends to be liberal when the sample size is small. We obtain a correction factor that reduces the size distortion of the test. We also consider a parametric bootstrap scheme to obtain improved critical values and improved p-values for the likelihood ratio test. The numerical results show that the modified tests are more reliable in finite samples than the usual likelihood ratio test. An empirical application is also presented.
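The parametric bootstrap scheme mentioned above is generic: refit the model under the null, simulate from that fit, and recompute the test statistic on each simulated sample. A minimal self-contained sketch follows, using a simple normal-mean likelihood ratio test in place of the Birnbaum-Saunders regression fit (which would require a dedicated ML routine):

```python
import numpy as np

def lrt_stat(x, mu0):
    """LR statistic for H0: mean = mu0, normal data with unknown variance."""
    n = len(x)
    s2_alt = np.var(x)                     # MLE of variance under H1
    s2_null = np.mean((x - mu0) ** 2)      # MLE of variance under H0
    return n * np.log(s2_null / s2_alt)

def bootstrap_pvalue(x, mu0, n_boot=2000, seed=0):
    """Parametric bootstrap p-value: resample from the model fitted under H0."""
    rng = np.random.default_rng(seed)
    obs = lrt_stat(x, mu0)
    sigma0 = np.sqrt(np.mean((x - mu0) ** 2))     # H0 fit
    boot = np.array([
        lrt_stat(rng.normal(mu0, sigma0, size=len(x)), mu0)
        for _ in range(n_boot)
    ])
    return obs, np.mean(boot >= obs)              # bootstrap p-value

rng = np.random.default_rng(123)
x = rng.normal(loc=0.4, scale=1.0, size=15)       # small sample, true mean != 0
stat, p = bootstrap_pvalue(x, mu0=0.0)
print(f"LR statistic = {stat:.3f}, bootstrap p-value = {p:.3f}")
```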
109.
Constrained linear regression models for symbolic interval-valued variables   (Total citations: 3; self-citations: 0; citations by others: 3)
This paper introduces an approach to fitting a constrained linear regression model to interval-valued data. Each example of the learning set is described by a feature vector in which each feature value is an interval. The new approach fits a constrained linear regression model on the midpoints and ranges of the interval values assumed by the variables in the learning set. The lower and upper boundaries of the interval value of the dependent variable are predicted from its midpoint and range, which are estimated from the fitted linear regression models applied to the midpoints and ranges of each interval value of the independent variables. This method shows the importance of range information for prediction performance, as well as the use of inequality constraints to ensure mathematical coherence between the predicted lower and upper boundaries of the interval. The authors also propose an expression for the goodness-of-fit measure known as the determination coefficient. The proposed prediction method is assessed by estimating the average behavior of the root-mean-square error and the square of the correlation coefficient in the framework of a Monte Carlo experiment with different data set configurations. Among other aspects, the synthetic data sets take into account the dependence, or lack thereof, between the midpoint and range of the intervals. The bias produced by the use of inequality constraints on the vector of parameters is also examined in terms of the mean-square error of the parameter estimates. Finally, the approaches proposed in this paper are applied to a real data set and performances are compared.
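A minimal sketch of the midpoint-range idea under one common constraint scheme (nonnegative coefficients in the range model, which keeps every predicted range nonnegative so the predicted lower bound never exceeds the upper bound); the paper's exact constraint formulation may differ:

```python
import numpy as np
from scipy.optimize import nnls

def fit_interval_regression(X_lo, X_hi, y_lo, y_hi):
    """Fit separate midpoint and range models for interval-valued regression."""
    Xm, Xr = (X_lo + X_hi) / 2, X_hi - X_lo        # midpoints and ranges
    ym, yr = (y_lo + y_hi) / 2, y_hi - y_lo
    Am = np.column_stack([np.ones(len(Xm)), Xm])   # add intercept column
    Ar = np.column_stack([np.ones(len(Xr)), Xr])
    beta_mid, *_ = np.linalg.lstsq(Am, ym, rcond=None)  # unconstrained midpoint fit
    beta_rng, _ = nnls(Ar, yr)                     # nonnegative range coefficients
    return beta_mid, beta_rng

def predict_interval(beta_mid, beta_rng, x_lo, x_hi):
    xm, xr = (x_lo + x_hi) / 2, x_hi - x_lo
    mid = beta_mid[0] + beta_mid[1:] @ np.atleast_1d(xm)
    rng_ = beta_rng[0] + beta_rng[1:] @ np.atleast_1d(xr)   # guaranteed >= 0
    return mid - rng_ / 2, mid + rng_ / 2          # (lower, upper) prediction

# Synthetic interval-valued data: one interval feature, one interval response.
rng = np.random.default_rng(7)
n = 50
x_mid = rng.uniform(0, 10, n); x_rng = rng.uniform(0.5, 2, n)
y_mid = 2 + 1.5 * x_mid + rng.normal(0, 0.3, n)
y_rng = 0.2 + 0.8 * x_rng + np.abs(rng.normal(0, 0.1, n))
X_lo, X_hi = x_mid - x_rng / 2, x_mid + x_rng / 2
y_lo, y_hi = y_mid - y_rng / 2, y_mid + y_rng / 2
bm, br = fit_interval_regression(X_lo, X_hi, y_lo, y_hi)
print(predict_interval(bm, br, 4.0, 5.0))          # predicted (lower, upper)
```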
110.
In software process improvement (SPI), few small organizations use models that guide the management and deployment of their improvement initiatives. This is largely because many of these models do not consider the special characteristics of small businesses, nor the appropriate strategies for deploying an SPI initiative in this type of organization. It should also be noted that the models that direct improvement implementation in small settings do not present an explicit process with which to organize and guide the internal work of the employees involved in implementing the improvement opportunities. In this paper we propose a lightweight process that takes into account appropriate strategies for this type of organization. Our proposal, known as a "Lightweight process to incorporate improvements", uses the philosophy of the Scrum agile method, aiming to give detailed guidelines for supporting the management and execution of improvement opportunities within processes and their putting into practice in small companies. We have applied the proposed process in two small companies by means of the case study research method, and the initial results indicate that it is indeed suitable for small businesses.