91.
The scientific method has been characterized as having two distinct components, Discovery and Justification. Discovery emphasizes ideas and creativity, focuses on conceiving hypotheses and constructing models, and is generally regarded as lacking a formal logic. Justification begins with the hypotheses and models and ends with a valid scientific inference. Unlike Discovery, Justification has a formal logic whose rules must be rigorously followed to produce valid scientific inferences. In particular, when inferences are based on sample data, the rules of the logic of Justification require assessments of bias and precision. Thus, satellite image-based maps that lack such assessments for parameters of populations depicted by the maps may be of little utility for scientific inference; essentially, they may be just pretty pictures. Probability- and model-based approaches are explained, illustrated, and compared for producing inferences for population parameters using a map depicting three land cover classes: non-forest, coniferous forest, and deciduous forest. The maps were constructed using forest inventory data and Landsat imagery. Although a multinomial logistic regression model was used to classify the imagery, the methods for assessing bias and precision can be used with any classification method. For probability-based approaches, the difference estimator was used, and for model-based inference, a bootstrap approach was used.
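As a minimal illustration of the kind of bias and precision assessment this abstract calls for, the sketch below compares a map-based class proportion against reference plot data and attaches a bootstrap interval. The plot counts, error rates, and resampling scheme are invented for the example; this is not the authors' difference estimator or their bootstrap design.

```python
# Hypothetical sketch: bias and precision assessment for the proportion of a
# map class (e.g. "deciduous forest") using reference plots and a plain
# bootstrap. All data below are simulated for illustration.
import numpy as np

rng = np.random.default_rng(42)

# Simulated reference sample: 1 = plot is truly deciduous forest, 0 = not.
# map_says_deciduous marks plots the classified image labels as deciduous.
n_plots = 500
truth = rng.binomial(1, 0.30, size=n_plots)
map_says_deciduous = np.where(rng.random(n_plots) < 0.85, truth,
                              rng.binomial(1, 0.30, size=n_plots))

map_proportion = map_says_deciduous.mean()   # proportion from the map alone
reference_proportion = truth.mean()          # proportion from reference plots
bias_estimate = map_proportion - reference_proportion

# Bootstrap the reference plots to attach a precision estimate to the
# bias-adjusted proportion (map proportion minus estimated bias).
boot = []
for _ in range(2000):
    idx = rng.integers(0, n_plots, n_plots)
    boot.append(map_proportion - (map_says_deciduous[idx].mean() - truth[idx].mean()))
boot = np.array(boot)

print(f"map proportion        : {map_proportion:.3f}")
print(f"estimated bias        : {bias_estimate:.3f}")
print(f"adjusted proportion   : {map_proportion - bias_estimate:.3f}")
print(f"95% bootstrap interval: [{np.quantile(boot, 0.025):.3f}, {np.quantile(boot, 0.975):.3f}]")
```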
92.
Developing augmented reality (AR) applications for mobile devices and outdoor environments has historically required a number of technical trade-offs related to tracking. One approach is to rely on computer vision, which provides very accurate tracking but can be brittle and limits the generality of the application. Another approach is to rely on sensor-based tracking, which enables widespread use, but at the cost of generally poor tracking performance. In this paper we present and evaluate a new approach, which we call Indirect AR, that enables perfect alignment of virtual content in a much greater number of application scenarios. To achieve this improved performance, we replace the live camera view used in video see-through AR with a previously captured panoramic image. By doing this we improve the perceived quality of the tracking while still maintaining a similar overall experience. There are, however, some limitations of this technique related to the use of panoramas. We evaluate these boundary conditions on both a performance and an experiential basis through two user studies. The results of these studies indicate that users preferred Indirect AR over traditional AR in most conditions, and that even when conditions degrade to the point that the experience changes, Indirect AR can still be a very useful tool in many outdoor application scenarios.
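A minimal sketch of the rendering idea behind Indirect AR, assuming an equirectangular panorama and a flat crop rather than a full reprojection; the panorama size, field of view, and orientation source are placeholders, not the system described in the paper.

```python
# Minimal sketch of the Indirect AR rendering step: instead of drawing on the
# live camera frame, crop a viewport out of a previously captured
# equirectangular panorama at the device's current orientation. Virtual
# content registered in panorama coordinates then stays perfectly aligned,
# because the background and the annotations share the same fixed frame.
import numpy as np

def indirect_ar_view(pano: np.ndarray, yaw_deg: float, pitch_deg: float,
                     hfov_deg: float = 60.0, vfov_deg: float = 45.0) -> np.ndarray:
    """Return the panorama region the device is currently 'looking at'.

    pano: equirectangular image of shape (H, W, 3); columns span 360 degrees
    of yaw, rows span 180 degrees of pitch.
    """
    h, w, _ = pano.shape
    # Centre of the viewport in panorama pixel coordinates.
    cx = int(((yaw_deg % 360.0) / 360.0) * w)
    cy = int(((np.clip(pitch_deg, -90.0, 90.0) + 90.0) / 180.0) * h)
    half_w = int(w * hfov_deg / 360.0 / 2)
    half_h = int(h * vfov_deg / 180.0 / 2)
    cols = np.arange(cx - half_w, cx + half_w) % w               # wrap around 360°
    rows = np.clip(np.arange(cy - half_h, cy + half_h), 0, h - 1)
    return pano[np.ix_(rows, cols)]

# Illustrative usage with a blank panorama and a made-up sensor reading.
pano = np.zeros((1024, 2048, 3), dtype=np.uint8)
view = indirect_ar_view(pano, yaw_deg=135.0, pitch_deg=5.0)
print(view.shape)
```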
93.
Vector fields are a common concept for the representation of many different kinds of flow phenomena in science and engineering. Methods based on vector field topology are known for their convenience for visualizing and analysing steady flows, but a counterpart for unsteady flows is still missing. However, a lot of good and relevant work aiming at such a solution is available. We give an overview of previous research leading towards topology‐based and topology‐inspired visualization of unsteady flow, pointing out the different approaches and methodologies involved as well as their relation to each other, taking classical (i.e. steady) vector field topology as our starting point. Particularly, we focus on Lagrangian methods, space–time domain approaches, local methods and stochastic and multifield approaches. Furthermore, we illustrate our review with practical examples for the different approaches.
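For readers coming from the classical steady setting this survey starts from, the sketch below locates and classifies a critical point of a 2D vector field via the eigenvalues of its Jacobian; the analytic field and the coarse classification rules are illustrative assumptions only, not material from the survey.

```python
# Illustrative sketch of classical (steady) vector field topology: classify a
# critical point by the eigenvalues of the field's Jacobian at that point.
import numpy as np

def velocity(p: np.ndarray) -> np.ndarray:
    """A simple steady 2D field with a critical point at the origin (a saddle)."""
    x, y = p
    return np.array([x, -y])

def jacobian(f, p: np.ndarray, eps: float = 1e-6) -> np.ndarray:
    """Central-difference Jacobian of f at p."""
    n = len(p)
    J = np.zeros((n, n))
    for i in range(n):
        d = np.zeros(n)
        d[i] = eps
        J[:, i] = (f(p + d) - f(p - d)) / (2 * eps)
    return J

def classify_critical_point(f, p: np.ndarray) -> str:
    re = np.linalg.eigvals(jacobian(f, p)).real
    if np.all(re > 0):
        return "repelling node/focus (source)"
    if np.all(re < 0):
        return "attracting node/focus (sink)"
    if np.any(re > 0) and np.any(re < 0):
        return "saddle"
    return "degenerate or centre (needs further analysis)"

print(classify_critical_point(velocity, np.array([0.0, 0.0])))  # -> saddle
```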
94.
95.
Information technology (IT) innovation research examines the organizational and technological factors that determine IT adoption and diffusion, including firm size and scope, technological competency and expected benefits. We extend the literature by focusing on information requirements as a driver of IT innovation adoption and diffusion. Our framework of IT innovation diffusion incorporates three industry-level sources of information requirements: process complexity, clock speed and supply chain complexity. We apply the framework to US manufacturing industries using aggregate data of internet-based innovations and qualitative analysis of two industries: wood products and beverage manufacturing. Results show systematic patterns supporting the basic thesis of the information processing paradigm: higher IT innovation diffusion in industries with higher information processing requirements; the salience of downstream industry structure in the adoption of interorganizational systems; and the role of the location of information intensity in the supply chain in determining IT adoption and diffusion. Our study provides a new explanation for why certain industries were early and deep adopters of internet-based innovations while others were not: variation in information processing requirements.
96.
A pervasive task in many forms of human activity is classification. Recent interest in the classification process has focused on ensemble classifier systems. These types of systems are based on a paradigm of combining the outputs of a number of individual classifiers. In this paper we propose a new approach for obtaining the final output of ensemble classifiers. The method presented here uses the Dempster–Shafer concept of belief functions to represent the confidence in the outputs of the individual classifiers. The combining of the outputs of the individual classifiers is based on an aggregation process which can be seen as a fusion of the Dempster rule of combination with a generalized form of the OWA operator. The use of the OWA operator provides an added degree of flexibility in expressing the way the aggregation of the individual classifiers is performed.
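To convey the flavour of the aggregation step, here is a small sketch of OWA-style fusion of per-class classifier confidences. The scores, weights, and decision rule are invented for the example; the paper's actual method additionally wraps the classifier outputs in Dempster–Shafer belief functions and combines them with the Dempster rule.

```python
# Hedged sketch: fuse per-class confidence scores from several classifiers
# with an Ordered Weighted Averaging (OWA) operator.
import numpy as np

def owa(values: np.ndarray, weights: np.ndarray) -> float:
    """OWA: sort the arguments in descending order, then take a weighted sum."""
    return float(np.sort(values)[::-1] @ weights)

# Rows: individual classifiers; columns: classes A, B, C (confidence scores).
scores = np.array([
    [0.70, 0.20, 0.10],
    [0.55, 0.35, 0.10],
    [0.30, 0.60, 0.10],
])

# OWA weights control the "and-ness" of the fusion: [1, 0, 0] acts like max
# (optimistic), [0, 0, 1] like min (pessimistic), uniform weights like a mean.
weights = np.array([0.5, 0.3, 0.2])

class_support = [owa(scores[:, c], weights) for c in range(scores.shape[1])]
print("fused support per class:", np.round(class_support, 3))
print("ensemble decision: class", "ABC"[int(np.argmax(class_support))])
```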
97.
The rapidly growing field of neuroproteomics has expanded to track global proteomic changes underlying various neurological conditions such as traumatic brain injury (TBI), stroke, and Alzheimer's disease. TBI remains a major health problem, with approximately 2 million incidents occurring annually in the United States, yet no effective treatment is available despite several clinical trials. The absence of brain injury diagnostic biomarkers was identified as a significant roadblock to therapeutic development for brain injury. Recently, the field of neuroproteomics has made major advances in the area of neurotrauma research, where several candidate markers have been identified and are being evaluated for their efficacy as biological biomarkers of TBI. One aim of this review is to evaluate the current status of TBI biomarker discovery using neuroproteomics techniques and the stage we have reached in their clinical validation. In addition, we discuss the need to strengthen the role of systems biology and its application to neuroproteomics, given its integral role in establishing a comprehensive understanding of specific brain disorders and of brain function in general. Finally, to achieve true clinical impact, these putative biomarkers should be validated using preclinical and clinical samples and linked to clinical diagnostic assays, including ELISA or other high-throughput assays.
98.
A novel fiber-optic confocal approach for ultrahigh depth-resolution (
99.
100.
It is known that nondeterministic polynomial time truth-table reducibility is exactly the same as nondeterministic polynomial time Turing reducibility. Here we study the standard nondeterministic reducibilities (conjunctive, bounded truth-table, bounded positive truth-table, and many-one) and show that each is a restriction of nondeterministic polynomial time Turing reducibility corresponding to acceptance modulo a set of oracle conditions. Then we show that the reduction classes of these reducibilities are classes of formal languages and as such have language theoretic characterization theorems. The same program is carried out for polynomial space. This research was supported in part by the National Science Foundation under Grants MCS77-23493, MSC80-11979, MCS81-20263, and MCS83-12472. The work of the second author was also supported by the United States-Israel Educational Foundation (Fulbright Award).
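For reference, the standard definitions behind the opening claim can be written as follows; the notation is the usual textbook one and may differ from the paper's own formalization.

```latex
% Usual textbook definitions (an assumption; the paper's formalization may differ).
\[
  A \le_{T}^{\mathrm{NP}} B
  \;\Longleftrightarrow\;
  A = L(M^{B}) \text{ for some nondeterministic polynomial-time oracle machine } M .
\]
\[
  A \le_{tt}^{\mathrm{NP}} B
  \;\Longleftrightarrow\;
  A \le_{T}^{\mathrm{NP}} B \text{ via a machine that produces all of its oracle
  queries before receiving any answers (nonadaptive queries).}
\]
% The known coincidence the abstract opens with:
\[
  \le_{tt}^{\mathrm{NP}} \;=\; \le_{T}^{\mathrm{NP}} .
\]
```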