951.
Locality-preserved maximum information projection.   (Total citations: 3; self-citations: 0; citations by others: 3)
Dimensionality reduction arises frequently in artificial intelligence and machine learning. Linear projection of features is of particular interest for dimensionality reduction because it is simple to compute and to analyze analytically. In this paper, we propose an essentially linear projection technique, called locality-preserved maximum information projection (LPMIP), to identify the underlying manifold structure of a data set. LPMIP considers both the within-locality and the between-locality in the process of manifold learning. Equivalently, the goal of LPMIP is to preserve the local structure while simultaneously maximizing the out-of-locality (global) information of the samples. Unlike principal component analysis (PCA), which aims to preserve the global information, and locality-preserving projections (LPP), which favors the local structure of the data set, LPMIP seeks a tradeoff between the global and local structures, adjusted by a parameter alpha, so as to find a subspace that detects the intrinsic manifold structure for classification tasks. Computationally, by constructing the adjacency matrix, LPMIP is formulated as an eigenvalue problem. LPMIP yields orthogonal basis functions and completely avoids the singularity problem that exists in LPP. Further, we develop an efficient and stable LPMIP/QR algorithm for implementing LPMIP, especially on high-dimensional data sets. Theoretical analysis shows that conventional linear projection methods such as (weighted) PCA, maximum margin criterion (MMC), linear discriminant analysis (LDA), and LPP can be derived from the LPMIP framework by setting different graph models and constraints. Extensive experiments on face, digit, and facial expression recognition show the effectiveness of the proposed LPMIP method.
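The alpha-tradeoff described above can be illustrated with a minimal numerical sketch. The k-NN graph construction, the two scatter matrices, and the way alpha mixes them are illustrative assumptions (the paper's exact objective and the LPMIP/QR variant are not reproduced here); the point is that a symmetric eigenvalue problem yields an orthogonal projection basis:

```python
import numpy as np

def lpmip_sketch(X, k=5, alpha=0.5, d=2):
    """Illustrative LPMIP-style projection (graph weights and the
    alpha mixing are assumptions, not the paper's exact formulation).
    X: (n_samples, n_features) -> (projected data, basis)."""
    n = X.shape[0]
    # k-NN adjacency for the within-locality graph
    dist = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    W = np.zeros((n, n))
    for i in range(n):
        idx = np.argsort(dist[i])[1:k + 1]   # skip self at position 0
        W[i, idx] = 1.0
    W = np.maximum(W, W.T)                   # symmetrize
    L = np.diag(W.sum(axis=1)) - W           # graph Laplacian (local structure)
    Xc = X - X.mean(axis=0)
    S_t = Xc.T @ Xc                          # total (global) scatter
    S_l = X.T @ L @ X                        # within-locality scatter
    # alpha trades off maximizing global information vs. preserving locality
    M = alpha * S_t - (1 - alpha) * S_l
    M = (M + M.T) / 2                        # enforce exact symmetry
    w, V = np.linalg.eigh(M)
    A = V[:, np.argsort(w)[::-1][:d]]        # top-d eigenvectors: orthogonal basis
    return X @ A, A

Z, A = lpmip_sketch(np.random.RandomState(0).randn(40, 6))
```

Because M is symmetric, `eigh` returns orthonormal eigenvectors, which is what gives the method its orthogonal basis functions and sidesteps the generalized-eigenproblem singularity that affects LPP.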
952.
What is the impact of business process standardization on business process outsourcing (BPO) success? This paper argues that there is a direct impact of process standardization on BPO success, due to production cost economies, and also an indirect effect via improved contractual and relational governance resulting from better monitoring opportunities and facilitated communication and coordination. This threefold impact of standardization on BPO success is empirically confirmed using data from 335 BPO ventures in 215 German banks.
953.
Location information should be verifiable in order to support new computing and information services. In this paper, we adapt the classical challenge-response method for authentication to the task of verifying an entity's location. Our scheme utilizes a collection of transmitters and adapts the power allocations across these transmitters to verify a user's claimed location. This strategy, which we call a power-modulated challenge response, can be used with existing wireless sensor networks. First, we propose a direct method, where some transmitters are selected to send "challenges" that the claimant node should be able to witness based on its claimed location, and to which the claimant node must respond correctly in order to prove its location. Second, we reverse the strategy by presenting an indirect method, where some transmitters send challenges that the claimant node should not be able to witness. Then, we present a signal-strength-based method, where the node responds with its received signal strength and thereby provides improved location verification. To evaluate our schemes, we examine different adversarial models for the claimant and characterize the performance of our power-modulated challenge response schemes under these adversarial models. Further, we propose a new localization attack, where a set of nodes collaborates to pretend that there is a node at the claimed location. This collusion attack can do tremendous harm to localization, and we characterize the performance of the aforementioned methods under it. Finally, we propose the use of a rotational directional power-modulated challenge response, where directional antennas are used to defend against collusion attacks.
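A toy sketch of the direct method may help fix ideas: the verifier powers each challenge just high enough to cover the claimed location, so a claimant that is actually elsewhere misses challenges it should have heard. The free-space path-loss model, the coverage threshold, and the power margin are all illustrative assumptions, not the paper's parameters:

```python
def coverage(tx, loc, p_tx, thresh=1e-6):
    """Can a node at `loc` hear transmitter `tx` at power p_tx?
    Free-space-style 1/d^2 path loss; threshold is an assumption."""
    d2 = (tx[0] - loc[0]) ** 2 + (tx[1] - loc[1]) ** 2
    return p_tx / max(d2, 1e-9) >= thresh

def direct_challenge(transmitters, claimed_loc, true_loc):
    """Direct method sketch: each transmitter sends a challenge at a
    power chosen so the claimed location is just inside coverage.
    The claim passes only if the claimant (really at true_loc) could
    hear every challenge the claimed location should hear."""
    for tx in transmitters:
        d2 = (tx[0] - claimed_loc[0]) ** 2 + (tx[1] - claimed_loc[1]) ** 2
        p_tx = 1e-6 * d2 * 1.1   # 10% margin over the coverage threshold
        if coverage(tx, claimed_loc, p_tx) and not coverage(tx, true_loc, p_tx):
            return False         # missed a challenge it claimed it could hear
    return True
```

An honest claimant (true location equals claimed location) passes every challenge; a claimant lying about its position from far away fails, since the tightly powered challenges do not reach it.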
954.
Polarized light imaging (PLI) is a method to image fiber orientation in gross histological brain sections based on the birefringent properties of the myelin sheaths. The method uses the transmission of polarized light to quantitatively estimate the fiber orientation and inclination angles at every point of the imaged section. Multiple sections can be assembled into a 3D volume, from which the 3D extent of fiber tracts can be extracted. This article describes the physical principles of PLI and describes two major applications of the method: the imaging of white matter orientation of the rat brain and the generation of fiber orientation maps of the human brain in white and gray matter. The strengths and weaknesses of the method are set out.
955.
Although real guardian angels aren't easy to get hold of, some of the computer technology needed for such a personal assistant is already available. Other parts exist in the form of research prototypes, but some technological breakthroughs are necessary before we can realize their potential, let alone integrate them into our daily routines. Future VR and AR interfaces won't necessarily try to provide a perfect imitation of reality but instead will adapt their display mechanisms to their users' individual requirements. The emergence of these interfaces won't rely on a single technology but will depend on advances in many areas, including computer graphics, display technology, tracking and recognition devices, natural and intuitive interaction, 3D interaction techniques, mobile and ubiquitous computing, intelligent agents, and conversational user interfaces, to name a few. The guardian angel scenario exemplifies how future developments in AR and VR user interfaces might change the way we interact with computers. Although this example is just one of several plausible scenarios, it demonstrates that AR and VR, in combination with user-centered design of their post-WIMP interfaces, can provide increased access, convenience, usability, and efficiency.
956.
The objective of this work was to develop and test a semi-automated finite element mesh generation method using computed tomography (CT) image data of a canine radius. The present study employs a direct conversion from CT Hounsfield units to elastic moduli. Our method attempts to minimize user interaction and eliminate the need for mesh smoothing to produce a model suitable for finite element analysis. Validation of the computational model was conducted by loading the CT-imaged canine radius in four-point bending and using strain gages to record resultant strains that were then compared to strains calculated with the computational model. Geometry-based and uniform modulus voxel-based models were also constructed from the same imaging data set and compared. The nonuniform voxel-based model most accurately predicted the axial strain response of the sample bone (R² = 0.9764).
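A direct HU-to-modulus conversion of the kind mentioned above is commonly written as a density power law applied voxel by voxel. The linear HU-to-density mapping and the power-law coefficients below are placeholder assumptions for illustration, not the paper's calibration:

```python
import numpy as np

def hu_to_modulus(hu, a=0.09, slope=0.0007, b=1.6, c=2.0):
    """Illustrative direct CT-to-material-property mapping:
    Hounsfield units -> apparent density (g/cm^3, assumed linear)
    -> elastic modulus via a power law E = c * rho**b (GPa, assumed).
    All coefficients are placeholders, not a validated calibration."""
    rho = a + slope * np.asarray(hu, dtype=float)
    rho = np.clip(rho, 0.01, None)   # guard against non-physical densities
    return c * rho ** b

# Denser (higher-HU) voxels are assigned stiffer element properties
E = hu_to_modulus(np.array([200.0, 800.0, 1500.0]))
```

In a nonuniform voxel-based model, each voxel-element would receive its own modulus from such a mapping, rather than a single uniform value for the whole bone.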
957.
It is a well-known fact that Hebbian learning is inherently unstable because of its self-amplifying terms: the more a synapse grows, the stronger the postsynaptic activity, and therefore the faster the synaptic growth. This unwanted weight growth is driven by the autocorrelation term of Hebbian learning where the same synapse drives its own growth. On the other hand, the cross-correlation term performs actual learning where different inputs are correlated with each other. Consequently, we would like to minimize the autocorrelation and maximize the cross-correlation. Here we show that we can achieve this with a third factor that switches on learning when the autocorrelation is minimal or zero and the cross-correlation is maximal. The biological counterpart of such a third factor is a neuromodulator that switches on learning at a certain moment in time. We show in a behavioral experiment that our three-factor learning clearly outperforms classical Hebbian learning.
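The instability of the autocorrelation term, and how a third factor gates it, can be seen in a one-synapse toy simulation. The update rule and the gating schedule here are illustrative, not the paper's behavioral model:

```python
import numpy as np

def hebbian(x_seq, eta=0.1, gate=None):
    """Toy single-synapse Hebbian rule dw = eta * g * x * y with
    y = w * x. `gate` plays the role of the third factor (a
    neuromodulator) that switches learning on or off per time step.
    With gate=None, learning is always on (classical Hebb)."""
    w = 0.1
    ws = []
    for t, x in enumerate(x_seq):
        y = w * x                      # postsynaptic activity
        g = 1.0 if gate is None else gate[t]
        w += eta * g * x * y           # autocorrelation term: self-amplifying
        ws.append(w)
    return np.array(ws)

x = np.ones(50)
plain = hebbian(x)                              # grows geometrically (unstable)
gated = hebbian(x, gate=(np.arange(50) < 5))    # third factor: learn only early on
```

With constant input, the ungated weight multiplies by (1 + eta) every step and diverges, while the gated weight freezes as soon as the modulator switches off, which is exactly the stabilizing role the third factor plays.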
958.
We analyze generalization in XCSF and introduce three improvements. We begin by showing that the types of generalizations evolved by XCSF can be influenced by the input range. To explain these results we present a theoretical analysis of the convergence of classifier weights in XCSF which highlights a broader issue. In XCSF, because of the mathematical properties of the Widrow-Hoff update, the convergence of classifier weights in a given subspace can be slow when the spread of the eigenvalues of the autocorrelation matrix associated with each classifier is large. As a major consequence, the system's accuracy pressure may act before classifier weights are adequately updated, so that XCSF may evolve piecewise constant approximations instead of the intended, and more efficient, piecewise linear ones. We propose three different ways to update classifier weights in XCSF so as to increase its generalization capabilities: one based on a condition-based normalization of the inputs, one based on linear least squares, and one based on the recursive version of linear least squares. Through a series of experiments we show that while all three approaches significantly improve XCSF, the least squares approaches perform best and are most robust. Finally, we show how XCSF can be extended to include polynomial approximations.
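The contrast between the incremental Widrow-Hoff update and a least-squares solve can be sketched on a plain linear regression problem. The data, learning rate, and epoch count are illustrative, and the XCSF classifier machinery itself is omitted:

```python
import numpy as np

def widrow_hoff(X, y, eta=0.05, epochs=50):
    """Per-sample LMS (Widrow-Hoff) update. Convergence slows when
    the eigenvalue spread of the input autocorrelation matrix is
    large, which is the issue the abstract analyzes."""
    w = np.zeros(X.shape[1])
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            w += eta * (yi - xi @ w) * xi
    return w

def least_squares(X, y):
    """Batch linear least squares: one of the three replacement
    weight updates the abstract proposes (its integration into
    XCSF classifiers is not reproduced here)."""
    w, *_ = np.linalg.lstsq(X, y, rcond=None)
    return w

rng = np.random.RandomState(0)
X = rng.randn(100, 3)
w_true = np.array([1.0, -2.0, 0.5])
y = X @ w_true                      # noise-free linear target
w_wh = widrow_hoff(X, y)
w_ls = least_squares(X, y)
```

On noise-free data both recover the true weights, but least squares does so in one solve regardless of the input distribution, whereas the LMS iteration needs many passes and a well-chosen learning rate.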
959.
960.
A reaction path including transition states is generated for the Silverman mechanism [R.B. Silverman, Chemical model studies for the mechanism of Vitamin K epoxide reductase, J. Am. Chem. Soc. 103 (1981) 5939-5941] of action for Vitamin K epoxide reductase (VKOR) using quantum mechanical methods (B3LYP/6-311G**). VKOR, an essential enzyme in mammalian systems, acts to convert Vitamin K epoxide, formed by Vitamin K carboxylase, to its (initial) quinone form for cellular reuse. This study elaborates on a prior work that focused on the thermodynamics of VKOR [D.W. Deerfield II, C.H. Davis, T. Wymore, D.W. Stafford, L.G. Pedersen, Int. J. Quant. Chem. 106 (2006) 2944-2952]. The geometries of proposed model intermediates and transition states in the mechanism are energy optimized. We find that once a key disulfide bond is broken, the reaction proceeds largely downhill. An important step in the conversion of the epoxide back to the quinone form involves initial protonation of the epoxide oxygen. We find that the source of this proton is likely a free mercapto group rather than a water molecule. The results are consistent with the current view that the widely used drug Warfarin likely acts by blocking binding of Vitamin K at the VKOR active site and thereby effectively blocking the initiating step. These results will be useful for designing more complete QM/MM studies of the enzymatic pathway once three-dimensional structural data is determined and available for VKOR.
Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号