By access type:
  Fee-based full text: 151862 articles
  Free: 1886 articles
  Free (domestic): 656 articles
By subject area:
  Electrical engineering: 3115 articles
  General: 187 articles
  Chemical industry: 24072 articles
  Metalworking: 5781 articles
  Machinery and instrumentation: 4943 articles
  Building science: 4418 articles
  Mining engineering: 377 articles
  Energy and power: 3928 articles
  Light industry: 17306 articles
  Water conservancy: 1140 articles
  Petroleum and natural gas: 618 articles
  Weapons industry: 5 articles
  Radio: 20493 articles
  General industrial technology: 28844 articles
  Metallurgy: 23754 articles
  Atomic energy technology: 2341 articles
  Automation technology: 13082 articles
By publication year:
  2019: 834 articles
  2018: 1081 articles
  2017: 1132 articles
  2016: 1268 articles
  2015: 1063 articles
  2014: 1796 articles
  2013: 6575 articles
  2012: 3193 articles
  2011: 4608 articles
  2010: 3594 articles
  2009: 4151 articles
  2008: 4647 articles
  2007: 4913 articles
  2006: 4350 articles
  2005: 4108 articles
  2004: 4002 articles
  2003: 3899 articles
  2002: 3928 articles
  2001: 3980 articles
  2000: 3741 articles
  1999: 3691 articles
  1998: 6632 articles
  1997: 5227 articles
  1996: 4460 articles
  1995: 3709 articles
  1994: 3358 articles
  1993: 3184 articles
  1992: 2783 articles
  1991: 2690 articles
  1990: 2628 articles
  1989: 2614 articles
  1988: 2459 articles
  1987: 2166 articles
  1986: 2115 articles
  1985: 2556 articles
  1984: 2316 articles
  1983: 2195 articles
  1982: 2068 articles
  1981: 1991 articles
  1980: 1860 articles
  1979: 1874 articles
  1978: 1770 articles
  1977: 2085 articles
  1976: 2562 articles
  1975: 1583 articles
  1974: 1430 articles
  1973: 1453 articles
  1972: 1196 articles
  1971: 1115 articles
  1970: 948 articles
10,000 results found (search time: 15 ms)
991.
It is well known that Hebbian learning is inherently unstable because of its self-amplifying terms: the more a synapse grows, the stronger the postsynaptic activity, and therefore the faster the synaptic growth. This unwanted weight growth is driven by the autocorrelation term of Hebbian learning, in which a synapse drives its own growth. The cross-correlation term, by contrast, performs the actual learning, correlating different inputs with each other. Consequently, we would like to minimize the autocorrelation and maximize the cross-correlation. Here we show that this can be achieved with a third factor that switches learning on when the autocorrelation is minimal or zero and the cross-correlation is maximal. The biological counterpart of such a third factor is a neuromodulator that switches on learning at a certain moment in time. We show in a behavioral experiment that our three-factor learning clearly outperforms classical Hebbian learning.
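As a rough illustration of the idea (a minimal sketch, not the paper's model), the update below gates a plain Hebbian rule with a hypothetical third-factor signal; the gating schedule, learning rate, and input statistics are all assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
n_inputs, eta = 5, 0.01
w = rng.normal(0.0, 0.1, n_inputs)

for t in range(1000):
    x = rng.normal(size=n_inputs)    # presynaptic activities
    v = float(w @ x)                 # postsynaptic activity
    # Expanding v = sum_j w_j * x_j in the Hebbian update dw_i = eta * v * x_i
    # exposes an autocorrelation term (j == i, the self-amplifying part) and
    # cross-correlation terms (j != i, the part that does the actual learning).
    third_factor = 1.0 if t % 100 == 0 else 0.0  # hypothetical neuromodulator pulse
    w += eta * third_factor * v * x
```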
992.
We analyze generalization in XCSF and introduce three improvements. We begin by showing that the types of generalizations evolved by XCSF can be influenced by the input range. To explain these results, we present a theoretical analysis of the convergence of classifier weights in XCSF, which highlights a broader issue. Because of the mathematical properties of the Widrow-Hoff update, the convergence of classifier weights in a given subspace can be slow when the spread of the eigenvalues of the autocorrelation matrix associated with each classifier is large. As a major consequence, the system's accuracy pressure may act before classifier weights are adequately updated, so that XCSF may evolve piecewise-constant approximations instead of the intended, and more efficient, piecewise-linear ones. We propose three ways to update classifier weights in XCSF so as to increase its generalization capabilities: one based on a condition-based normalization of the inputs, one based on linear least squares, and one based on the recursive version of linear least squares. Through a series of experiments we show that, while all three approaches significantly improve XCSF, the least-squares approaches appear to be the best performing and most robust. Finally, we show how XCSF can be extended to include polynomial approximations.
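For readers unfamiliar with the two update rules at issue, here is a generic sketch contrasting a Widrow-Hoff (LMS) step with a recursive least squares (RLS) step; these are the standard textbook forms, not the XCSF-specific code, and the learning rate, forgetting factor, and input scaling are assumptions chosen to exhibit the eigenvalue-spread effect:

```python
import numpy as np

def widrow_hoff(w, x, y, eta=0.001):
    """One Widrow-Hoff (LMS) step: slow to converge when the eigenvalues
    of the input autocorrelation matrix E[x x^T] are widely spread."""
    return w + eta * (y - w @ x) * x

def rls_step(w, P, x, y, lam=1.0):
    """One recursive-least-squares step: largely insensitive to the
    eigenvalue spread, at the price of maintaining the matrix P."""
    k = P @ x / (lam + x @ P @ x)          # gain vector
    w = w + k * (y - w @ x)                # prediction-error correction
    P = (P - np.outer(k, x @ P)) / lam     # update inverse-correlation estimate
    return w, P

rng = np.random.default_rng(1)
w_lms, w_rls, P = np.zeros(2), np.zeros(2), 1e3 * np.eye(2)
for _ in range(200):
    x = rng.normal(size=2) * np.array([1.0, 10.0])   # badly scaled inputs
    y = 2.0 * x[0] + 3.0 * x[1]
    w_lms = widrow_hoff(w_lms, x, y)
    w_rls, P = rls_step(w_rls, P, x, y)
print(w_lms, w_rls)   # RLS lands near (2, 3); LMS is still far off in the slow direction
```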
993.
994.
A reaction path including transition states is generated for the Silverman mechanism [R.B. Silverman, Chemical model studies for the mechanism of Vitamin K epoxide reductase, J. Am. Chem. Soc. 103 (1981) 5939-5941] of action for Vitamin K epoxide reductase (VKOR) using quantum mechanical methods (B3LYP/6-311G**). VKOR, an essential enzyme in mammalian systems, converts Vitamin K epoxide, formed by Vitamin K carboxylase, back to its (initial) quinone form for cellular reuse. This study elaborates on a prior work that focused on the thermodynamics of VKOR [D.W. Deerfield II, C.H. Davis, T. Wymore, D.W. Stafford, L.G. Pedersen, Int. J. Quant. Chem. 106 (2006) 2944-2952]. The geometries of the proposed model intermediates and transition states in the mechanism are energy-optimized. We find that once a key disulfide bond is broken, the reaction proceeds largely downhill. An important step in the conversion of the epoxide back to the quinone form is the initial protonation of the epoxide oxygen; we find that the source of this proton is likely a free mercapto group rather than a water molecule. The results are consistent with the current view that the widely used drug Warfarin likely acts by blocking the binding of Vitamin K at the VKOR active site, thereby effectively blocking the initiating step. These results will be useful for designing more complete QM/MM studies of the enzymatic pathway once three-dimensional structural data for VKOR become available.
995.
This paper reviews the way in which teeth damaged by caries may be repaired clinically. The mechanical effects of caries are described, as are the materials available to repair the damage caused by the disease. Studies are reported showing that caries reduces the compressive strength of the tooth to less than 50 per cent of its original value and that, with appropriate materials and placement techniques, this can be restored to some 80 per cent of the original value. However, very few studies have viewed tooth repair from an engineering perspective; instead, emphasis has been placed on determining the clinical durability of repairs. Durability is related to repair strength but brings in other factors, such as the oral hygiene of the patient. Despite this complication, durability studies show that modern restorative materials perform well under clinical conditions, from which it may be concluded that the repair process allows a structure to be fabricated that is essentially sound from an engineering viewpoint, even if inferior to the original tooth structure provided by nature.
996.
W. Hackbusch, Computing 78(2) (2006) 145-159
The solution of population balance equations is a function f(t,r,x) describing the population density of particles with property x at time t and spatial position r. For instance, the additional independent variable x may denote the particle size. The governing partial differential equation contains additional sink and source terms involving integral operators. Since the coordinate x adds at least one further dimension to the spatial directions and time coordinate, an efficient numerical treatment of the integral terms is crucial. One of the more involved integral terms appearing in population balance models is the coalescence integral, which has the form $\int_0^x \kappa(x-y,\,y)\, f(y)\, f(x-y)\,\mathrm{d}y$. In this paper, we describe an evaluation method for this integral that needs only $\mathcal{O}(n \log n)$ operations, where n is the number of degrees of freedom with respect to the variable x. This cost can also be obtained in the case of a grid geometrically refined towards x=0.
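To make the cost claim concrete, here is a minimal sketch under strong simplifying assumptions: a uniform grid and the special case κ ≡ 1, where the coalescence integral reduces to a self-convolution that an FFT evaluates in O(n log n). The paper's method targets the harder setting of general kernels and geometrically graded grids; this is only the easy instance of the same complexity class:

```python
import numpy as np

def coalescence_constant_kernel(f, h):
    """Evaluate q(x_i) = integral_0^{x_i} f(y) f(x_i - y) dy for kappa == 1
    on a uniform grid of spacing h, via FFT-based self-convolution."""
    n = len(f)
    m = 2 * n                             # zero-pad to avoid circular wrap-around
    F = np.fft.rfft(f, m)
    conv = np.fft.irfft(F * F, m)[:n]     # discrete self-convolution, O(n log n)
    return h * conv                       # rectangle-rule quadrature weight

h = 0.01
x = h * np.arange(1000)
f = np.exp(-x)                            # sample density
q = coalescence_constant_kernel(f, h)
# For f(x) = exp(-x) the exact integral is x * exp(-x); the rectangle rule
# is only O(h) accurate, so the agreement is loose but clearly visible.
assert np.allclose(q, x * np.exp(-x), atol=2e-2)
```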
997.
The efficiency of neuronal encoding in sensory and motor systems has been proposed as a first principle governing response properties within the central nervous system. We present a continuation of the theoretical study by Zhang and Sejnowski, in which the influence of neuronal tuning properties on encoding accuracy is analyzed using information theory. When a finite stimulus space is considered, we show that encoding accuracy improves with narrow tuning for one- and two-dimensional stimuli. For three dimensions and higher, there is an optimal tuning width.
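For context, the baseline result being extended, stated here as background rather than taken from the abstract, is the Zhang-Sejnowski scaling of Fisher information with tuning width for radially symmetric tuning curves of width σ over an unbounded D-dimensional stimulus space:

\[
J(\sigma) \;\propto\; \sigma^{D-2},
\]

so, in that idealized setting, sharpening helps only for D = 1, is neutral for D = 2, and hurts for D ≥ 3; the finite stimulus space considered here is what changes this picture.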
998.
S. W., Computer Aided Design 33(14) (2001) 1091-1109
This paper presents a new layer-based technique for automatic high-level segmentation of 3-D surface contours into individual surface features through motif analysis. The procedure starts from a contour-based surface model representing a composite surface area of an object. For each of the surface contours, a relative turning angle (RTA) map is derived. The RTA map usually contains noise and minor features. Algorithms based on motif analysis are applied to extract a main profile of the RTA map free from background noise and other minor features. Feature points are then identified on the extracted main profile through further motif analysis, and the original contour is partitioned into individual segments at these feature points. A collection of consecutive contour segments across different layers forms an individual 3-D surface feature of the original composite surface. The developed approach using motif analysis is particularly useful for identifying smooth joins between individual surface features and for eliminating superposed noise and unwanted minor features.
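As a small illustration of the quantity being mapped (a hypothetical helper of my own, not the paper's code), the relative turning angle along a closed polygonal contour can be computed as the signed angle between consecutive edge vectors; peaks in this profile are candidate feature points, which the paper's motif analysis then separates from noise and minor features:

```python
import numpy as np

def relative_turning_angles(points):
    """Signed turning angle at each vertex of a closed polygonal contour."""
    pts = np.asarray(points, dtype=float)        # shape (n, 2), closed contour
    e = np.roll(pts, -1, axis=0) - pts           # edge vectors
    a = np.arctan2(e[:, 1], e[:, 0])             # edge headings
    rta = np.roll(a, -1) - a                     # turn at each vertex
    return (rta + np.pi) % (2 * np.pi) - np.pi   # wrap into (-pi, pi]

square = [(0, 0), (1, 0), (1, 1), (0, 1)]
print(relative_turning_angles(square))           # four right-angle turns: pi/2 each
```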
999.
A simple strategy for calibrating the geometry of light sources
We present a methodology for calibrating multiple light source locations in 3D from images. The procedure involves a novel calibration object consisting of three spheres at known relative positions, and uses intensity images to find the positions of the light sources. We conducted experiments locating light sources in 51 different positions in a laboratory setting. Our data show that the vector from a point in the scene to a light source can be measured to within 2.7±4° at α=.05 (6 percent relative) of its true direction and within 0.13±.02 m at α=.05 (9 percent relative) of its true magnitude, compared with empirically measured ground truth. Finally, we demonstrate how the light source information is used for color correction.
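A minimal sketch of the geometric idea behind sphere-based light calibration, written under simplifying assumptions of my own (a distant viewer, a detected specular highlight, and a known sphere center and radius) rather than as the paper's algorithm:

```python
import numpy as np

def light_direction_from_highlight(center, radius, highlight, view_dir):
    """Direction toward a distant light inferred from a specular highlight on
    a sphere: the surface normal at the highlight bisects the view and light
    directions, so the light is the view direction mirrored about the normal."""
    n = (np.asarray(highlight, float) - np.asarray(center, float)) / radius
    v = np.asarray(view_dir, float)
    v = v / np.linalg.norm(v)
    return 2.0 * np.dot(n, v) * n - v

# With several spheres, intersecting the resulting rays (one per sphere)
# locates a finite-distance source in 3D.
L = light_direction_from_highlight(center=[0, 0, 0], radius=1.0,
                                   highlight=[0, 0, 1], view_dir=[0, 0, 1])
print(L)   # light along +z when the highlight sits at the top facing the viewer
```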
1000.
Recently, High Performance Computing (HPC) platforms have been employed to realize many computationally demanding applications in signal and image processing. These applications must meet real-time performance constraints on both latency and throughput, which calls for efficient parallel algorithms engineered to exploit the computational characteristics of such applications. In this paper we present a methodology for mapping a class of adaptive signal processing applications onto HPC platforms so that throughput performance is optimized. We first define a new task model using the salient computational characteristics of this class of applications, and based on it we propose a new execution model. Whereas the earlier linear pipelined execution model restricted the task mapping choices, the new model permits flexible task mapping, leading to improved throughput performance compared with the previous model. Using the new model, a three-step task mapping methodology is developed, consisting of (1) a data remapping step, (2) a coarse resource allocation step, and (3) a fine performance tuning step. The methodology is demonstrated by designing parallel algorithms for modern radar and sonar signal processing applications, implemented on IBM SP2 and Cray T3E, state-of-the-art HPC platforms, to show the effectiveness of our approach. Experimental results show significant performance improvement over those obtained by previous approaches. Our code is written in C with the Message Passing Interface (MPI) and is thus portable across various HPC platforms. Received April 8, 1998; revised February 2, 1999.
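As a toy illustration of step (2) only, here is a generic greedy heuristic of my own under the strong assumption of ideal linear speedup, not the paper's algorithm: size each pipeline stage so that no stage's service time exceeds the target period set by the throughput requirement.

```python
import math

def coarse_allocation(stage_work, n_procs, target_period):
    """Processors per pipeline stage so that each stage's (idealized)
    parallel service time stage_work[i] / procs[i] <= target_period."""
    need = [max(1, math.ceil(w / target_period)) for w in stage_work]
    if sum(need) > n_procs:
        raise ValueError("throughput target infeasible on this machine")
    return need

print(coarse_allocation([4.0, 9.0, 2.0], n_procs=16, target_period=1.0))  # -> [4, 9, 2]
```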