81.
The surface of glass beads of average particle size 100 μm was modified (a) by incorporating extra hydroxyl groups through chemical treatment, and (b) by applying a thin coating of polymethacrylic acid (PMA) to the glass surface. The corresponding chemical changes were investigated using infrared spectroscopy. The tensile behaviour of a glass-bead-filled PVC composite prepared with the surface-modified beads showed the following effects: (a) hydroxyl groups incorporated onto the glass surface did not affect the glass-PVC interface and hence did not change the tensile behaviour of the composite; (b) the PMA coating on the glass surface improved the tensile behaviour in the low-strain region and degraded it in the high-strain region. An SEM study of the fractured surface suggested debonding at the glass-PVC interface in the first case and failure of the PVC-PMA interface in the second.
82.
The Normalized Difference Vegetation Index (NDVI), a measure of vegetation vigour, and lake water levels respond variably to precipitation and its deficiency. For a given lake catchment, NDVI may be able to depict localized natural variability in water levels in response to weather patterns, and this information may be used to distinguish natural from unnatural variations of a given lake's surface. This study evaluates the potential of NDVI and its associated derivatives (VCI (vegetation condition index), SVI (standardised vegetation index), AINDVI (annually integrated NDVI), green vegetation function (Fg), and NDVIA (NDVI anomaly)) to depict Lake Victoria's water levels. Thirty years of monthly mean water levels and a portion of the Global Inventory Modelling and Mapping Studies (GIMMS) AVHRR (Advanced Very High Resolution Radiometer) NDVI datasets were used. Their aggregate data structures and temporal co-variabilities were analysed using GIS/spatial analysis tools. Locally, NDVI was found to be more sensitive to drought (i.e., it responded more strongly to reduced precipitation) than to water levels. It showed a good ability to depict water levels one month in advance, especially in moderate- to low-precipitation years. SVI and SWL (standardized water levels), used in association with AINDVI and AMWLA (annual mean water levels anomaly), readily identified high-precipitation years, which are also the years in which NDVI has a low ability to depict water levels. NDVI also appears able to highlight unnatural variations in water levels. We propose an iterative approach for the better use of NDVI, which may be useful in developing an early-warning mechanism for the management of Lake Victoria and other lakes with similar characteristics.
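The NDVI and SVI definitions used in this kind of study are standard. As an illustration only (not the authors' code; the function names and sample values are assumptions), a minimal Python sketch:

```python
import statistics

def ndvi(nir, red):
    """Normalized Difference Vegetation Index: (NIR - Red) / (NIR + Red)."""
    return (nir - red) / (nir + red)

def svi(ndvi_series):
    """Standardised Vegetation Index: z-score of an NDVI time series
    against its own mean and (population) standard deviation."""
    mu = statistics.fmean(ndvi_series)
    sigma = statistics.pstdev(ndvi_series)
    return [(v - mu) / sigma for v in ndvi_series]
```

An SVI computed this way is dimensionless, so it can be compared directly against similarly standardized water levels (SWL), as the abstract does.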
83.
Here we describe a novel hand-held reference point indentation (RPI) instrument designed for clinical measurement of bone material properties in living patients. This instrument differs from previous RPI instruments in that it requires neither a reference probe nor removal of the periosteum that covers the bone, thus significantly simplifying its use in patient testing. After describing the instrument, we discuss five guidelines for optimal and reproducible results: (1) the angle between the normal to the surface and the axis of the instrument should be less than 10°; (2) the compression of the main spring that triggers the device must be performed slowly (over >1 s); (3) the probe tip should be sharper than 10 μm, although a normalized parameter obtained with a calibration phantom can correct for dull tips up to a 100 μm radius; (4) the ambient room temperature should be between 4 °C and 37 °C; and (5) the effective mass of the bone or material under test must exceed 1 kg, or, if it is under 1 kg, the specimen should be securely anchored in a fixation device of sufficient mass (a requirement not shared by previous RPI instruments). In our experience, a person can be trained in these guidelines in about 5 min and thereafter obtain accurate and reproducible results. The portability, ease of use, and minimal training make this instrument suitable for measuring bone material properties in a clinical setting.
84.
With growing attention to energy and environmental problems, many new energy sources have emerged as alternatives to traditional fossil fuels. Among them, solar energy is the most abundant renewable clean energy source and can be regarded as practically inexhaustible. In recent years, solar power generation technology has entered a period of rapid development, and its combination with emerging nanotechnology is expected to bring revolutionary changes to the development of green energy.
85.
Paul J.M., Thomas D.E., Bobrek A. IEEE Transactions on Very Large Scale Integration (VLSI) Systems, 2006, 14(8): 868-880
Single-chip heterogeneous multiprocessors (SCHMs) are arising to meet the computational demands of portable and handheld devices. These computing systems are neither the fully custom designs traditionally targeted by the design automation community, nor the general-purpose designs traditionally targeted by the computer architecture community, nor the pure embedded designs traditionally targeted by the real-time community; an entirely new design philosophy will be needed for this hybrid class of computing. The programming of the device will be drawn from a narrower set of applications, with execution that persists in the system over a longer period than in general-purpose programming. However, the devices will still be programmable, not only at the level of the individual processing element but across multiple processing elements and even the entire chip. The design of other programmable single-chip computers has enjoyed an era in which design tradeoffs could be captured in simulators such as SimpleScalar and performance could be evaluated against the SPEC benchmarks. Motivated by this, we describe new benchmark-based design strategies for SCHMs, which we refer to as scenario-oriented design, and include an example and results.
86.
Paul F. Mlakar, Donald O. Dusenberry, James R. Harris, Gerald Haynes, Long T. Phan, Mete A. Sozen. Canadian Metallurgical Quarterly, 2005, 19(3): 197-205
On September 11, 2001, an airliner was intentionally crashed into the Pentagon. It struck at the first elevated slab on the west wall and slid approximately 310 ft (94.5 m) diagonally into the building. The force of the collision demolished numerous columns and the façade of the exterior wall, and damaged first-floor columns and the first elevated slab over an area approximately 90 ft (27.4 m) wide and 310 ft (94.5 m) long. No part of the building collapsed immediately. The portion that remained standing, even after an intense fire, sustained substantial damage at the first-floor level.
87.
The present study presents a methodology for detailed reliability analysis of a nuclear containment without a metallic liner against aircraft crash. For this purpose, a nonlinear limit state function has been derived using violation of the tolerable crack width as the failure criterion; this criterion is used because radioactive radiation may escape if the crack size exceeds the tolerable crack width. The derived limit state uses the containment response obtained from a detailed dynamic analysis of the nuclear containment under the impact of a large Boeing jet aircraft. Using this response in conjunction with the limit state function, reliabilities and probabilities of failure are obtained at a number of vulnerable locations employing an efficient first-order reliability method (FORM). These values of reliability and probability of failure at the various vulnerable locations are then used to estimate the conditional and annual reliabilities of the nuclear containment as a function of its distance from the airport. A sensitivity analysis has been performed to study the influence of the various random variables on containment reliability, and some parametric studies have also been included to obtain results of field and academic interest.
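FORM computes a reliability index β and a failure probability Pf = Φ(−β). The abstract's limit state is nonlinear, but the idea is easiest to see in the linear special case g = R − S with independent normal capacity R and demand S, where β has a closed form. A minimal sketch (illustrative only; the variable names are assumptions, not the study's):

```python
import math

def norm_cdf(x):
    """Standard normal CDF via the complementary error function."""
    return 0.5 * math.erfc(-x / math.sqrt(2.0))

def form_linear(mu_r, sig_r, mu_s, sig_s):
    """Reliability index and failure probability for the linear limit
    state g = R - S with independent normal R (capacity) and S (demand).
    For this special case the FORM result is exact."""
    beta = (mu_r - mu_s) / math.sqrt(sig_r**2 + sig_s**2)
    return beta, norm_cdf(-beta)
```

For a nonlinear limit state, as in the study, β is instead found iteratively (e.g. by the Hasofer-Lind-Rackwitz-Fiessler algorithm) as the distance from the origin to the design point in standard normal space.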
88.
89.
This paper proposes that self-deception results from the emotional coherence of beliefs with subjective goals. We apply the HOTCO computational model of emotional coherence to simulate a rich case of self-deception from Hawthorne's The Scarlet Letter. We argue that this model is more psychologically realistic than other available accounts of self-deception, and discuss related issues such as wishful thinking, intention, and the division of the self.
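Coherence models of this family settle a network of mutually constraining units (beliefs, goals, evidence) by connectionist updating until activations stabilize. As a hedged illustration only (not the HOTCO implementation; the update rule is the standard Thagard-style rule, and all names and weights below are assumptions):

```python
def settle(n, links, clamped, iters=200, decay=0.05):
    """Settle a coherence network of n units by parallel connectionist
    updating. links maps (i, j) pairs to symmetric weights (positive =
    mutual support, negative = incoherence); clamped maps a unit index
    to a fixed activation (e.g. an evidence unit held at 1.0)."""
    w = [[0.0] * n for _ in range(n)]
    for (i, j), wt in links.items():
        w[i][j] = wt
        w[j][i] = wt
    a = [0.01] * n
    for u, v in clamped.items():
        a[u] = v
    for _ in range(iters):
        new = a[:]
        for j in range(n):
            if j in clamped:
                continue
            net = sum(w[i][j] * a[i] for i in range(n))
            # Positive net input pushes activation toward +1 (accepted),
            # negative net input toward -1 (rejected), with decay.
            if net > 0:
                new[j] = a[j] * (1 - decay) + net * (1.0 - a[j])
            else:
                new[j] = a[j] * (1 - decay) + net * (a[j] + 1.0)
            new[j] = max(-1.0, min(1.0, new[j]))
        a = new
    return a
```

With an evidence unit clamped at 1.0, a belief linked positively to it settles near +1 while a rival belief linked negatively to the first settles near −1; self-deception arises in such models when goal-driven (emotional) links outweigh evidential ones.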
90.
Symmetric multiprocessor systems are increasingly common, not only as high-throughput servers but as a vehicle for executing a single application in parallel in order to reduce its execution latency. This article presents Pedigree, a compilation tool that employs a new partitioning heuristic based on the program dependence graph (PDG). Pedigree creates overlapping, potentially interdependent threads, each executing on a subset of the SMP processors that matches the thread's available parallelism. A unified framework is used to build threads from procedures, loop nests, loop iterations, and smaller constructs. Pedigree does not require any parallel language support; it is a post-compilation tool that reads in object code. The SDIO Signal and Data Processing Benchmark Suite has been selected as an example of real-time, latency-sensitive code. Its coarse-grained data-flow parallelism is naturally exploited by Pedigree to achieve speedups of 1.63×/2.13× (mean/max) and 1.71×/2.41× on two and four processors, respectively, roughly a 20% improvement over existing techniques that exploit only data parallelism. By exploiting the unidirectional flow of data for coarse-grained pipelining, the synchronization overhead is typically limited to less than 6% for a synchronization latency of 100 cycles, and less than 2% for 10 cycles.
This research was supported by ONR contract numbers N00014-91-J-1518 and N00014-96-1-0347. We would like to thank the Pittsburgh Supercomputing Center for use of their Alpha systems.