By access type (number of articles):
Paid full text: 2022
Free: 38
Free (domestic): 1
By subject area (number of articles):
Electrical engineering: 8
Chemical industry: 199
Metalworking: 24
Machinery and instruments: 47
Building science: 49
Mining engineering: 1
Energy and power: 28
Light industry: 132
Water conservancy engineering: 5
Petroleum and natural gas: 4
Radio and electronics: 132
General industrial technology: 249
Metallurgy: 1066
Atomic energy technology: 5
Automation technology: 112
By publication year (number of articles):
2022: 8
2021: 13
2019: 9
2018: 21
2017: 7
2016: 24
2015: 16
2014: 21
2013: 60
2012: 27
2011: 52
2010: 33
2009: 36
2008: 34
2007: 39
2006: 33
2005: 35
2004: 15
2003: 32
2002: 23
2001: 35
2000: 22
1999: 61
1998: 278
1997: 177
1996: 129
1995: 75
1994: 77
1993: 70
1992: 34
1991: 27
1990: 37
1989: 28
1988: 32
1987: 21
1986: 17
1985: 31
1984: 31
1983: 18
1982: 18
1981: 18
1980: 23
1979: 13
1978: 13
1977: 57
1976: 77
1975: 17
1974: 12
1973: 9
1972: 9
A total of 2061 results were found (search time: 93 ms).
61.
As processor performance increases and memory cost decreases, system intelligence continues to move away from the CPU and into peripherals. Storage system designers use this trend toward excess computing power to perform more complex processing and optimizations inside storage devices. To date, such optimizations take place at relatively low levels of the storage protocol. Trends in storage density, mechanics, and electronics eliminate the hardware bottleneck and put pressure on interconnects and hosts to move data more efficiently. We propose an active disk storage device that combines on-drive processing and memory with software downloadability, allowing disks to execute application-level functions directly at the device. Moving portions of an application's processing to a storage device significantly reduces data traffic and leverages the parallelism already present in large systems, dramatically reducing the execution time for many basic data mining tasks.
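The core claim, reduced to a toy example: if a filter predicate runs on the drive, only matching records cross the interconnect. The sketch below is purely illustrative and assumes a hypothetical ActiveDisk object with a fixed record size; it is not the interface described in the paper.

```python
# Hypothetical sketch of the active-disk idea: ship a filter to the drive
# so only matching records cross the interconnect. The ActiveDisk class,
# run_filter method, and record size are illustrative assumptions.

RECORD_SIZE = 128  # bytes per record (assumed)

class ActiveDisk:
    def __init__(self, records):
        self.records = records          # data stored on the drive
        self.bytes_shipped = 0          # traffic over the interconnect

    def read_all(self):
        """Conventional path: every record moves to the host."""
        self.bytes_shipped += len(self.records) * RECORD_SIZE
        return list(self.records)

    def run_filter(self, predicate):
        """Active-disk path: the predicate executes on-drive and only
        matching records move to the host."""
        matches = [r for r in self.records if predicate(r)]
        self.bytes_shipped += len(matches) * RECORD_SIZE
        return matches

disk = ActiveDisk(records=range(1_000_000))

# Host-side filtering: one million records shipped, then filtered.
host_hits = [r for r in disk.read_all() if r % 1000 == 0]

# Device-side filtering: only about one thousand records shipped.
drive_hits = disk.run_filter(lambda r: r % 1000 == 0)

print(disk.bytes_shipped)  # traffic dominated by the conventional read
```

With a 0.1% selectivity filter, the device-side path ships roughly a thousandth of the bytes, which illustrates where the reduction in data traffic comes from.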
62.
Photometric reconstruction is the process of estimating the illumination and surface reflectance properties of an environment, given a geometric model of the scene and a set of photographs of its surfaces. For mixed-reality applications, such data is required if synthetic objects are to be correctly illuminated or if synthetic light sources are to be used to re-light the scene. Current methods of estimating such data are limited in the practical situations in which they can be applied, because the geometric and radiometric models of the scene provided by the user must be complete and the position (and, in some cases, intensity) of the light sources must be specified a priori. In this paper, a novel algorithm is presented which overcomes these constraints and allows photometric data to be reconstructed in less restricted situations. This is achieved through the use of virtual light sources which mimic the effect of direct illumination from unknown luminaires and of indirect illumination reflected off unknown geometry. The intensities of these virtual light sources and the surface material properties are estimated using an iterative algorithm which matches calculated radiance values to those observed in photographs. Results are presented for both synthetic and real scenes, showing the quality of the reconstructed data and its use in off-line mixed-reality applications.
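A minimal sketch of the iterative matching step, under the simplifying assumption that, once reflectances are fixed, observed radiance is linear in the unknown virtual-light intensities; the fit then alternates a least-squares solve for the intensities with a patch-wise reflectance update. The function names, the precomputed form-factor matrix F, and the alternating scheme are assumptions for illustration, not the paper's exact algorithm.

```python
# Minimal sketch of an iterative photometric fit, assuming observed radiance
# is approximately diag(rho) @ F @ intensities, where rho are per-patch
# reflectances and F is a precomputed form-factor/visibility matrix.
# All names and the alternating scheme are illustrative assumptions.
import numpy as np

def fit_photometry(L_obs, F, n_iters=50):
    """Alternately estimate per-patch reflectance rho and per-source
    intensity I so that diag(rho) @ F @ I matches the observed radiance."""
    n_patches, n_lights = F.shape
    rho = np.full(n_patches, 0.5)          # initial reflectance guess
    I = np.ones(n_lights)                  # initial intensity guess
    for _ in range(n_iters):
        # Step 1: with rho fixed, least-squares solve for the intensities.
        A = rho[:, None] * F
        I, *_ = np.linalg.lstsq(A, L_obs, rcond=None)
        I = np.clip(I, 0.0, None)          # intensities stay non-negative
        # Step 2: with I fixed, reflectance follows patch-wise.
        irradiance = F @ I
        rho = np.clip(L_obs / np.maximum(irradiance, 1e-9), 0.0, 1.0)
    return rho, I

# Synthetic test: fit noiseless "photograph" data generated from known values
# (recovery is only up to a global scale ambiguity between rho and I).
rng = np.random.default_rng(0)
F = rng.uniform(0.1, 1.0, size=(20, 3))
rho_true, I_true = rng.uniform(0.2, 0.9, 20), np.array([2.0, 0.5, 1.0])
rho_est, I_est = fit_photometry(rho_true * (F @ I_true), F)
```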
63.
We describe a novel approach for clustering collections of sets and its application to the analysis and mining of categorical data. By “categorical data,” we mean tables with fields that cannot be naturally ordered by a metric, for example the names of automobile producers or the names of products offered by a manufacturer. Our approach is based on an iterative method for assigning and propagating weights on the categorical values in a table; this yields a similarity measure that arises from the co-occurrence of values in the dataset. Our techniques can be studied analytically in terms of certain types of non-linear dynamical systems.
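A minimal sketch of the kind of weight assignment and propagation the abstract describes: every categorical value carries a weight, each pass lets co-occurring values reinforce one another, and the weight vector is renormalized, so repeated application behaves like a non-linear dynamical map. The additive combining rule and the single weight vector are simplifying assumptions, not the paper's exact formulation.

```python
# Minimal sketch of iterative weight propagation over categorical values.
# The additive combiner and the single normalized weight vector are
# assumptions made for illustration.
import math
from collections import defaultdict

def propagate_weights(rows, n_iters=30):
    """rows: list of tuples of categorical values (one tuple per record)."""
    values = {v for row in rows for v in row}
    w = {v: 1.0 for v in values}                 # uniform initial weights
    for _ in range(n_iters):
        new_w = defaultdict(float)
        for row in rows:
            total = sum(w[v] for v in row)
            for v in row:
                # Each value absorbs the weight of its co-occurring values.
                new_w[v] += total - w[v]
        norm = math.sqrt(sum(x * x for x in new_w.values()))
        w = {v: x / norm for v, x in new_w.items()}
    return w

rows = [("honda", "red"), ("honda", "blue"),
        ("toyota", "red"), ("bmw", "silver"), ("audi", "silver")]
weights = propagate_weights(rows)
# Co-occurring values reinforce one another's weights; the resulting weight
# pattern is the similarity signal used for clustering.
print(sorted(weights.items(), key=lambda kv: kv[1]))
```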
64.
An analysis is presented for the mechanics of the hydrostatic extrusion of polymers in the solid phase through a conical die. The analysis starts with the lower-bound solution proposed by Hoffman and Sachs and includes the effects of strain, strain rate and pressure on the deformation behaviour. It is proposed that this requires knowledge of the tensile stress-strain-strain rate relationships for each polymer, and it is shown how such information for polyethylene and polyoxymethylene can be used to explain the observed behaviour of these materials in the solid-phase extrusion process.
List of Symbols:
α: die cone semi-angle
normal stress coefficient of the tensile flow stress
d_0: initial diameter of the billet
d_f: die exit diameter
ε̇: axial strain rate (plug flow)
γ_red: shear strain (redundant strain) incurred on crossing the die entry or exit boundary
ε_N = ln R_N = 2 ln(d_0/d_f): nominal true strain in extrusion
ε_f = γ_red + ε_N
L cot α
μ: normal stress coefficient of friction at the die/billet interface
P: experimental extrusion pressure = total work done per unit volume of material
P_F: work done per unit volume against billet-die friction
P_I: ideal deformation work done per unit volume of material
P_R: total redundant work done per unit volume
P_W = P - P_I: extra work required to overcome friction, pressure and redundant strain effects
r_0: initial radius of the billet
r_f: die exit radius
r: material radius at a point in the deformation zone
R_N = (r_0/r_f)^2: nominal extrusion ratio
R = (r_0/r)^2: extrusion ratio at a point in the deformation zone
σ_0(ε): axial tensile flow stress
σ_f(ε): process flow stress path, related to the die strain and strain rate fields
σ_h: tensile haul-off stress
σ_x, σ_y: die stresses in the deformation zone
τ_1, τ_2: shear yield stress of the material at the die entry and exit boundaries, respectively
v_x: axial velocity
v_f: extrudate velocity at die exit
65.
We compared two gastrin radioimmunoassay kits ("Immutope" kit, Squibb & Co.; "Gastrin R.I.A." kit, Schwarz/Mann) to the conventional gastrin radioimmunoassay of Yalow and Berson [Gastroenterology 58, 1 (1970)] as run by us and by a second reference laboratory. Although both kits were found to effectively discriminate above-normal and normal values for serum gastrin, they significantly underestimated very high values (greater than 1500 ng/liter). The Schwarz/Mann kit clearly had a superior quality label (lower nonspecific binding and higher specific activity) and a shorter incubation time. However, the 90-min incubation period cited for their kit caused overestimation of gastrin values in the lower range (5-300 ng/liter), which could be corrected by prolonging the incubation to 24 h. The Squibb antibody had fairly good cross reactivity to all gastrin species tested; the Schwarz/Mann antibody had poor affinity for natural human gastrin G34-II. Good correspondence was found for sera run by both reference laboratories (y = 0.96x + 10, r = 0.997), and values obtained with the Schwarz/Mann kit correlated best (+ 0.815) with those from the conventional radioimmunoassay procedure.
66.
Neurobiological and behavioral research indicates that place learning and response learning occur simultaneously, in parallel. Such findings seem to conflict with theories of associative learning in which different cues compete for learning. The authors conducted place + response training on a radial maze and then tested place learning and response learning separately by reconfiguring the maze in various ways. Consistent with the effects of manipulating place and response systems in the brain (M. G. Packard & J. L. McGaugh, 1996), well-trained rats showed strong place learning and strong response learning. Three experiments using associative blocking paradigms indicated that prior response learning interferes with place learning. Blocking and related tests can be used to better understand how memory systems interact during learning.
67.
Processing capacity, defined as the relative ability to perform mental work in a unit of time, is a critical construct in cognitive psychology and is central to theories of visual attention. The unambiguous use of the construct, experimentally and theoretically, has been hindered by both conceptual confusions and the use of measures that are at best only coarsely mapped to the construct. However, more than 25 years ago, J. T. Townsend and F. G. Ashby (1978) suggested that the hazard function on the response time (RT) distribution offered a number of conceptual advantages as a measure of capacity. The present study suggests that a set of statistical techniques, well known outside the cognitive and perceptual literatures, offers the ability to perform hypothesis tests on RT-distribution hazard functions. These techniques are introduced, and their use is illustrated in application to data from the contingent attentional capture paradigm.
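For concreteness, the capacity measure in question is the hazard function of the RT distribution, h(t) = f(t) / (1 - F(t)). The sketch below shows one common, binned (life-table style) estimate of h(t) from raw response times; the bin width is an arbitrary choice, and the paper's hypothesis-testing machinery is not reproduced here.

```python
# Minimal sketch: a binned (life-table style) estimate of the RT hazard
# function h(t) = f(t) / (1 - F(t)). The estimator and bin width are
# illustrative choices, not the statistical tests the paper describes.
import numpy as np

def binned_hazard(rts, bin_width=50.0):
    """Return bin left edges and the estimated hazard in each bin (1/ms)."""
    rts = np.sort(np.asarray(rts, dtype=float))
    edges = np.arange(rts.min(), rts.max() + bin_width, bin_width)
    hazards = []
    for left in edges[:-1]:
        at_risk = np.sum(rts >= left)                     # still "surviving"
        events = np.sum((rts >= left) & (rts < left + bin_width))
        hazards.append(events / (at_risk * bin_width) if at_risk else np.nan)
    return edges[:-1], np.array(hazards)

# Example with simulated response times in milliseconds.
rng = np.random.default_rng(1)
rts = rng.gamma(shape=5.0, scale=80.0, size=2000) + 200.0
t, h = binned_hazard(rts)
```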
68.
In this paper we present two simple, reliable and readily applicable methods for calibrating cantilevers and measuring the thickness of thin gold films. The spring constant calibration requires knowledge of the cantilever's Young's modulus, density and resonant frequency. The thickness of thin gold layers was determined by measuring changes in the resonant frequency and Q-factor of beam-shaped AFM cantilevers before and after coating. The techniques for measuring the spring constant and thin-film thickness provide accuracy on the order of 10-15%.
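A hedged sketch of the kind of calculation implied for a rectangular, beam-shaped cantilever: the first flexural resonance fixes the thickness given Young's modulus, density and length (textbook Euler-Bernoulli beam theory), and the spring constant then follows from the static bending formula. The example values and the exact procedure are assumptions; the paper's treatment of the gold coating is not reproduced here.

```python
# Hedged sketch of a resonance-based calibration for a rectangular,
# beam-shaped cantilever, using textbook Euler-Bernoulli results:
#   f1 = (lambda1^2 / (2*pi)) * (t / L^2) * sqrt(E / (12*rho))
#   k  = E * w * t^3 / (4 * L^3)
# Numbers below are made-up example values, not data from the paper.
import math

LAMBDA1 = 1.8751  # first-mode eigenvalue of a clamped-free beam

def thickness_from_resonance(f1, L, E, rho):
    """Invert the resonance formula for the beam thickness t."""
    return 2.0 * math.pi * f1 * L**2 * math.sqrt(12.0 * rho / E) / LAMBDA1**2

def spring_constant(E, w, t, L):
    """Static bending stiffness of a rectangular cantilever."""
    return E * w * t**3 / (4.0 * L**3)

# Example: an assumed silicon cantilever, L = 450 um, w = 50 um, f1 = 13 kHz.
E, rho = 169e9, 2330.0            # Pa, kg/m^3 (silicon, assumed)
L, w, f1 = 450e-6, 50e-6, 13e3
t = thickness_from_resonance(f1, L, E, rho)   # roughly 2 um
k = spring_constant(E, w, t, L)               # roughly 0.16 N/m
```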
69.
70.
There is increasing interest in the use of the percolation paradigm to analyse and predict the progress of disease spreading in spatially structured populations of animals and plants. The wider utility of the approach has been limited, however, by several restrictive assumptions, foremost of which is a strict requirement for simple nearest-neighbour transmission, in which the disease history of an individual is influenced only by that of its neighbours. In a recent paper, the percolation paradigm has been generalized to incorporate synergistic interactions in host infectivity and susceptibility, and the impact of these interactions on the invasive dynamics of an epidemic has been demonstrated. In the current paper, we elicit evidence that such synergistic interactions may underlie transmission dynamics in real-world systems by first formulating a model for the spread of a ubiquitous parasitic and saprotrophic fungus through replicated populations of nutrient sites and subsequently fitting and testing the model using data from experimental microcosms. Using Bayesian computational methods for model fitting, we demonstrate that synergistic interactions are necessary to explain the dynamics observed in the replicate experiments. The broader implications of this work in identifying disease-control strategies that deflect epidemics from invasive to non-invasive regimes are discussed.
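To make the notion of synergistic transmission concrete, the sketch below simulates nearest-neighbour spread on a lattice in which the probability of a site becoming infected grows superlinearly with the number of infected neighbours (synergy in susceptibility). The functional form, parameters and SIR-style update are assumptions for illustration; the paper's model and its Bayesian fitting are not reproduced here.

```python
# Illustrative sketch of nearest-neighbour spread with a synergistic
# susceptibility term: infection probability grows faster than independently
# with the number of infected neighbours. The form p = 1 - exp(-beta * n^theta)
# and all parameter values are assumptions.
import numpy as np

def spread(size=50, beta=0.3, theta=1.8, steps=60, seed=0):
    rng = np.random.default_rng(seed)
    S, I, R = 0, 1, 2
    grid = np.full((size, size), S)
    grid[size // 2, size // 2] = I           # single initial focus
    for _ in range(steps):
        infected = (grid == I)
        inf_int = infected.astype(np.int8)
        # Count infected von Neumann neighbours of every site.
        n = (np.roll(inf_int, 1, 0) + np.roll(inf_int, -1, 0) +
             np.roll(inf_int, 1, 1) + np.roll(inf_int, -1, 1)).astype(float)
        p_infect = 1.0 - np.exp(-beta * n ** theta)   # synergy when theta > 1
        new_inf = (grid == S) & (rng.random(grid.shape) < p_infect)
        grid[infected] = R                   # infected sites recover
        grid[new_inf] = I
    return np.mean(grid == R)                # fraction that has recovered

# Compare spread without (theta = 1) and with (theta > 1) synergy.
print(spread(theta=1.0), spread(theta=1.8))
```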