A total of 3,079 search results were found (search time: 15 ms).
91.
We report on the experimental realization of an ultrahigh-vacuum (UHV) indium seal between a ConFlat knife edge and an optical window. The seal requires a very low clamping force and thus allows the use of very thin and fragile windows.
92.
It is a well-known fact that Hebbian learning is inherently unstable because of its self-amplifying terms: the more a synapse grows, the stronger the postsynaptic activity, and therefore the faster the synaptic growth. This unwanted weight growth is driven by the autocorrelation term of Hebbian learning where the same synapse drives its own growth. On the other hand, the cross-correlation term performs actual learning where different inputs are correlated with each other. Consequently, we would like to minimize the autocorrelation and maximize the cross-correlation. Here we show that we can achieve this with a third factor that switches on learning when the autocorrelation is minimal or zero and the cross-correlation is maximal. The biological counterpart of such a third factor is a neuromodulator that switches on learning at a certain moment in time. We show in a behavioral experiment that our three-factor learning clearly outperforms classical Hebbian learning.
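As a rough numerical illustration of the idea summarized above (not code from the paper): the third factor simply gates the plain Hebbian update on or off. The gating criterion, learning rate, and toy input statistics in the sketch below are assumptions.

```python
# Hedged sketch of a gated ("three-factor") Hebbian update for a linear neuron
# y = w . x. gate plays the role of the third factor: 1.0 = learning on,
# 0.0 = learning off. Classical Hebbian learning is the special case gate = 1.0.
import numpy as np

rng = np.random.default_rng(0)

def hebbian_step(w, x, eta=0.01, gate=1.0):
    y = float(w @ x)
    return w + eta * gate * y * x

# Toy inputs: x[1] is correlated with x[0]; the cross-correlation is what we
# would like the plastic synapse w[1] to pick up.
w_gated = np.array([1.0, 0.0])
w_classic = np.array([1.0, 0.0])

for _ in range(500):
    x = rng.normal(size=2)
    x[1] += 0.8 * x[0]

    # Crude stand-in for the third factor: learn only while the plastic
    # synapse's own contribution to the output (its autocorrelation drive)
    # is still small. This threshold is an assumption of the sketch.
    gate = 1.0 if abs(w_gated[1] * x[1]) < 0.1 else 0.0

    w_gated = hebbian_step(w_gated, x, gate=gate)
    w_classic = hebbian_step(w_classic, x, gate=1.0)

print("gated three-factor weights:", w_gated)
print("classical Hebbian weights :", w_classic)  # keep growing (self-amplification)
```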
93.
We present a powerful framework for 3D-texture-based rendering of multiple arbitrarily intersecting volumetric datasets. Each volume is represented by a multi-resolution octree-based structure and we use out-of-core techniques to support extremely large volumes. Users define a set of convex polyhedral volume lenses, which may be associated with one or more volumetric datasets. The volumes or the lenses can be interactively moved around while the region inside each lens is rendered using interactively defined multi-volume shaders. Our rendering pipeline splits each lens into multiple convex regions such that each region is homogeneous and contains a fixed number of volumes. Each such region is further split by the brick boundaries of the associated octree representations. The resulting puzzle of lens fragments is sorted in front-to-back or back-to-front order using a combination of a view-dependent octree traversal and a GPU-based depth peeling technique. Our current implementation uses slice-based volume rendering and allows interactive roaming through multiple intersecting multi-gigabyte volumes.
94.
Topology provides a foundation for the development of mathematically sound tools for processing and exploration of scalar fields. Existing topology-based methods can be used to identify interesting features in volumetric data sets, to find seed sets for accelerated isosurface extraction, or to treat individual connected components as distinct entities for isosurfacing or interval volume rendering. We describe a framework for direct volume rendering based on segmenting a volume into regions of equivalent contour topology and applying separate transfer functions to each region. Each region corresponds to a branch of a hierarchical contour tree decomposition, and a separate transfer function can be defined for it. The novel contributions of our work are: 1) a volume rendering framework and interface where a unique transfer function can be assigned to each subvolume corresponding to a branch of the contour tree, 2) a runtime method for adjusting data values to reflect contour tree simplifications, 3) an efficient way of mapping a spatial location into the contour tree to determine the applicable transfer function, and 4) an algorithm for hardware-accelerated direct volume rendering that visualizes the contour tree-based segmentation at interactive frame rates using graphics processing units (GPUs) that support loops and conditional branches in fragment programs.
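The per-branch transfer-function idea can be sketched independently of the GPU implementation described in the abstract. The toy example below assumes a precomputed label volume that assigns every voxel to a contour-tree branch; compositing then looks up a branch-specific transfer function per sample. The label volume, transfer functions, and ray are assumptions, not the authors' data structures.

```python
# Minimal CPU sketch of per-branch transfer functions (assumed precomputed inputs):
#   volume  : scalar field, shape (Z, Y, X)
#   branches: integer label per voxel giving its contour-tree branch (assumed given)
#   tfs     : one transfer function (scalar -> RGBA) per branch id
import numpy as np

def composite_ray(volume, branches, tfs, samples):
    """Front-to-back alpha compositing along one ray of integer (z, y, x) samples."""
    color = np.zeros(3)
    alpha = 0.0
    for z, y, x in samples:
        s = volume[z, y, x]
        branch = int(branches[z, y, x])
        r, g, b, a = tfs[branch](s)           # branch-specific transfer function
        color += (1.0 - alpha) * a * np.array([r, g, b])
        alpha += (1.0 - alpha) * a
        if alpha > 0.99:                      # early ray termination
            break
    return color, alpha

# Toy data: two "branches" with different appearances.
volume = np.random.default_rng(1).random((8, 8, 8))
branches = np.zeros((8, 8, 8), dtype=int)
branches[:, :, 4:] = 1                        # pretend the contour tree split the volume in two

tfs = {
    0: lambda s: (1.0, 0.2, 0.2, 0.05 * s),   # reddish, mostly transparent
    1: lambda s: (0.2, 0.2, 1.0, 0.30 * s),   # bluish, more opaque
}

ray = [(z, 4, 4) for z in range(8)]           # a straight ray through the volume
print(composite_ray(volume, branches, tfs, ray))
```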
95.
The Morse-Smale complex is an efficient representation of the gradient behavior of a scalar function, and critical points paired by the complex identify topological features and their importance. We present an algorithm that constructs the Morse-Smale complex in a series of sweeps through the data, identifying various components of the complex in a consistent manner. All components of the complex, both geometric and topological, are computed, providing a complete decomposition of the domain. Efficiency is maintained by representing the geometry of the complex in terms of point sets.
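For orientation only, and not the sweep algorithm of the paper: the critical points that the Morse-Smale complex pairs can be illustrated on a 2D grid by classifying each interior vertex from the sign pattern of its 8-neighbour differences. The test function and grid resolution below are arbitrary assumptions.

```python
# Toy classification of critical points of a 2D scalar grid. Minima, maxima and
# saddles are the building blocks that the Morse-Smale complex pairs and connects.
import numpy as np

def classify_critical_points(f):
    """Return lists of (i, j) grid indices for minima, maxima and saddles."""
    minima, maxima, saddles = [], [], []
    # Neighbours enumerated in cyclic order around the centre vertex.
    ring = [(-1, -1), (-1, 0), (-1, 1), (0, 1), (1, 1), (1, 0), (1, -1), (0, -1)]
    for i in range(1, f.shape[0] - 1):
        for j in range(1, f.shape[1] - 1):
            diffs = [f[i + di, j + dj] - f[i, j] for di, dj in ring]
            if all(d > 0 for d in diffs):
                minima.append((i, j))
            elif all(d < 0 for d in diffs):
                maxima.append((i, j))
            else:
                # Four or more sign changes around the ring indicate a saddle.
                signs = [d > 0 for d in diffs]
                changes = sum(signs[k] != signs[(k + 1) % 8] for k in range(8))
                if changes >= 4:
                    saddles.append((i, j))
    return minima, maxima, saddles

# Example scalar field with two humps and a saddle between them.
x, y = np.meshgrid(np.linspace(-2, 2, 41), np.linspace(-2, 2, 41))
f = np.exp(-((x - 1) ** 2 + y ** 2)) + np.exp(-((x + 1) ** 2 + y ** 2))
mins, maxs, sads = classify_critical_points(f)
print(f"{len(mins)} minima, {len(maxs)} maxima, {len(sads)} saddles")
```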
96.
Heart rate variability (HRV) reflects cardiovascular control mediated by the autonomic nervous system and other mechanisms. In established Task Force HRV monitoring, different cardiovascular control mechanisms can be approximately identified by power spectral analysis at characteristic frequencies of heart rate oscillations. HRV measures assessing complex and fractal behavior have partly improved clinical risk stratification; however, their relationship to (patho-)physiology is not sufficiently explored. The objective of the present work is to introduce complexity measures on different physiologically relevant time scales. This is achieved by a new concept, autonomic information flow (AIF) analysis, designed in accordance with the Task Force HRV recommendations. First applications show that different time scales of AIF improve risk stratification of patients with multiple organ dysfunction syndrome and of cardiac arrest patients compared with standard HRV. Each group's significant time scales correspond to its respective pathomechanisms.
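Not the authors' AIF method, but for context: the frequency-domain part of Task Force style HRV analysis estimates spectral power of the RR-interval series in fixed bands, commonly LF (0.04–0.15 Hz) and HF (0.15–0.40 Hz). The sketch below shows one common way to compute these band powers; the resampling rate, Welch segment length, and synthetic tachogram are assumptions.

```python
# Hedged sketch of band-power HRV analysis (standard LF/HF bands; resampling
# rate, Welch segment length, and the synthetic tachogram are assumptions).
import numpy as np
from scipy.interpolate import interp1d
from scipy.signal import welch

def lf_hf_power(rr_ms, fs=4.0):
    """LF and HF spectral power (ms^2) of an RR-interval series given in ms."""
    rr_ms = np.asarray(rr_ms, dtype=float)
    t = np.cumsum(rr_ms) / 1000.0                      # beat times in seconds
    grid = np.arange(t[0], t[-1], 1.0 / fs)            # RR series is unevenly sampled:
    rr_even = interp1d(t, rr_ms, kind="cubic")(grid)   # resample onto an even grid
    f, pxx = welch(rr_even - rr_even.mean(), fs=fs, nperseg=min(256, len(rr_even)))
    df = f[1] - f[0]
    lf = pxx[(f >= 0.04) & (f < 0.15)].sum() * df
    hf = pxx[(f >= 0.15) & (f < 0.40)].sum() * df
    return lf, hf

# Synthetic tachogram: 800 ms mean RR modulated at 0.10 Hz (LF) and 0.25 Hz (HF).
t_beat = np.arange(600) * 0.8
rr = 800 + 30 * np.sin(2 * np.pi * 0.10 * t_beat) + 20 * np.sin(2 * np.pi * 0.25 * t_beat)
lf, hf = lf_hf_power(rr)
print(f"LF power ~ {lf:.0f} ms^2, HF power ~ {hf:.0f} ms^2, LF/HF ~ {lf / hf:.2f}")
```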
97.
A precise calculation of the amount of intraalveolar fluid is the basis of a quantitative analysis of intraalveolar compounds. Different approaches have been taken to address this important problem. Here, we report a comparative study with five markers: 99mTc-DTPA, 51Cr-EDTA, inulin, urea, and methylene blue, in animal as well as human experiments. The marker substances were added to the lavage fluid, and the "dilution" of the markers, and hence the alveolar fluid volume, was calculated. The results showed that in animals with healthy lungs the tracer methods are able to calculate amounts of intraalveolar fluid that are comparable to morphologic findings. In animals as well as in humans, methylene blue and inulin proved useless for determining alveolar fluid volume compared with the tracer methods. In humans, the calculations with the urea method and with Tc-DTPA were of the same order of magnitude, but there was no individual correlation. We conclude that, at present, the methods to quantitate alveolar fluid volume lack precision and add nothing to a deeper understanding of alveolar biology.
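The dilution principle behind such marker methods can be made concrete with a simplified, hypothetical calculation (not the authors' protocol): if a known amount of marker is instilled with the lavage fluid, the drop in its recovered concentration reflects dilution by the resident alveolar fluid. The volumes and concentrations below are invented, and the formula assumes complete mixing and full recovery, idealizations the study's conclusion warns about.

```python
# Hypothetical worked example of the indicator-dilution estimate of alveolar
# fluid volume. Assumes complete mixing, full marker recovery, and no marker
# uptake -- idealizations, not a validated clinical calculation.
def alveolar_fluid_volume(v_instilled_ml, c_instilled, c_recovered):
    """Estimate resident fluid volume (ml) from marker dilution.

    Marker mass is conserved:
        c_instilled * v_instilled = c_recovered * (v_instilled + v_alveolar)
    =>  v_alveolar = v_instilled * (c_instilled / c_recovered - 1)
    """
    return v_instilled_ml * (c_instilled / c_recovered - 1.0)

# 100 ml lavage fluid instilled; marker concentration drops from 10.0 to 9.2 units/ml.
print(f"Estimated alveolar fluid: {alveolar_fluid_volume(100.0, 10.0, 9.2):.1f} ml")
# -> about 8.7 ml of resident fluid diluted the marker.
```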
98.
This article explores the achievable transmission electron microscopy (TEM) specimen thickness and quality using three different preparation methods in the case of a high-strength nanocrystalline Cu-Nb powder alloy. Low specimen thickness is essential for spatially resolved analyses of the grains in nanocrystalline materials. We have found that single-sided as well as double-sided low-angle Ar ion milling of the Cu-Nb powders embedded in epoxy resin produced wedge-shaped particles of very low thickness (<10 nm) near the edge. By means of a modified focused ion beam lift-out technique that generates holes in the lamella interior, large micrometer-sized electron-transparent regions were obtained. However, this lamella displayed a higher thickness of ≥30 nm at the rim. Limiting factors for the observed thicknesses are discussed, including ion damage depths, backscattering, and surface roughness, which depend on ion type, energy, current density, and specimen motion. Finally, sections cut by ultramicrotomy at a low stroke rate and a low set thickness offered vast, uniformly thin regions of several tens of square micrometers with a minimum thickness of ~10 nm. As major drawbacks, we detected a thin coating of the epoxy embedding material on the sections as well as considerable nanoscale thickness variations.
99.
Reliable routing of packets in a Mobile Ad Hoc Network (MANET) has always been a major concern. The open medium and the susceptibility of the nodes to faults make the design of protocols for these networks a challenging task. The faults in these networks, which occur either due to the failure of nodes or due to reorganization, can lead to packet loss. Such losses degrade the performance of the routing protocols running on them. In this paper, we propose a routing algorithm, named learning automata based fault-tolerant routing algorithm (LAFTRA), which is capable of routing in the presence of faulty nodes in MANETs using multipath routing. We have used the theory of Learning Automata (LA) for optimizing the selection of paths, reducing the overhead in the network, and learning about the faulty nodes present in the network. The proposed algorithm can be used alongside any existing routing protocol in a MANET. Simulation results for our protocol in network simulator 2 (ns-2) show an increase in packet delivery ratio and a decrease in overhead compared to existing protocols. In terms of packet delivery ratio with nearly 30% faulty nodes in the network, the proposed protocol gains an edge of nearly 2% over FTAR and E2FT and of more than 10% over AODV. The overhead generated by our protocol is about 1% lower than that of FTAR and nearly 17% lower than that of E2FT when there are nearly 30% faulty nodes.
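The abstract does not state which learning automaton scheme LAFTRA uses for path selection; as a hedged illustration only, the sketch below applies a standard linear reward-inaction (L_R-I) update to a fixed set of candidate paths. The path names, delivery probabilities, and learning rate are hypothetical.

```python
# Illustrative sketch only: a linear reward-inaction (L_R-I) learning automaton
# choosing among candidate routes. The routes, their delivery probabilities and
# the learning rate are hypothetical; this is not the LAFTRA implementation.
import random

class PathSelectorLA:
    def __init__(self, paths, learning_rate=0.05):
        self.paths = list(paths)
        self.lr = learning_rate
        self.p = [1.0 / len(self.paths)] * len(self.paths)  # action probabilities

    def choose(self):
        return random.choices(range(len(self.paths)), weights=self.p)[0]

    def reward(self, i):
        # L_R-I update: on success via path i, shift probability mass towards it;
        # on failure (penalty), the probabilities are left unchanged.
        for j in range(len(self.p)):
            if j == i:
                self.p[j] += self.lr * (1.0 - self.p[j])
            else:
                self.p[j] -= self.lr * self.p[j]

# Environment model: each path delivers a packet with some unknown probability;
# "B" crosses a faulty node and drops most packets.
random.seed(0)
delivery_prob = {"A": 0.9, "B": 0.4, "C": 0.7}
la = PathSelectorLA(delivery_prob)

for _ in range(3000):
    i = la.choose()
    if random.random() < delivery_prob[la.paths[i]]:  # packet acknowledged
        la.reward(i)

# Probability mass should have concentrated on the most reliable path.
print({path: round(p, 3) for path, p in zip(la.paths, la.p)})
```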
100.
    
Analysis of patulin in grape juices and wine
Summary: Grape musts from the Rheinpfalz wine-growing region were analysed for their patulin content. After extraction and clean-up, patulin was determined by high-performance liquid chromatography and thin-layer chromatography. The use of Extrelut columns proved advantageous for the extraction of patulin. No patulin was detectable in 62% of the samples examined (n = 55); 22% contained less than 50 µg and 16% more than 50 µg of patulin per litre. Patulin present in the must could be removed by sulfiting the must at the level customary in practice (100 mg potassium pyrosulfite per litre) and fermentation with yeasts of the genus Saccharomyces.