131.
A high-temperature Seebeck coefficient measurement apparatus with various features that minimize typical sources of error is designed and built. Common sources of temperature and voltage measurement error are described, and principles to overcome them are proposed. Following these guiding principles, a high-temperature Seebeck measurement apparatus with a uniaxial four-point contact geometry is designed to operate from room temperature to over 1200 K. The instrument is simple to operate and suitable for bulk samples spanning a broad range of physical types and shapes.
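As a companion to the abstract above, the following is a minimal sketch of how a relative Seebeck coefficient is commonly extracted from paired temperature-difference and thermal-voltage readings. The readings, function name, and sign convention shown here are illustrative assumptions, not taken from the paper's apparatus.

```python
import numpy as np

def seebeck_from_slope(delta_T, delta_V):
    """Estimate the relative Seebeck coefficient (V/K) from paired
    temperature-difference and thermal-voltage readings.

    Fitting the slope of dV versus dT (rather than dividing single
    points) suppresses constant voltage offsets, one of the error
    sources a careful measurement tries to cancel."""
    delta_T = np.asarray(delta_T, dtype=float)
    delta_V = np.asarray(delta_V, dtype=float)
    slope, _offset = np.polyfit(delta_T, delta_V, 1)
    return -slope  # S = -dV/dT under the usual sign convention

# Hypothetical readings (K, V) taken at one mean temperature:
dT = [1.0, 2.1, 3.0, 4.2, 5.1]
dV = [-2.1e-4, -4.3e-4, -6.2e-4, -8.6e-4, -10.4e-4]
print(f"S ≈ {seebeck_from_slope(dT, dV) * 1e6:.1f} µV/K")
```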
132.
133.
134.
The Quranic Arabic Corpus (http://corpus.quran.com) is a collaboratively constructed linguistic resource initiated at the University of Leeds, with multiple layers of annotation including part-of-speech tagging, morphological segmentation (Dukes and Habash 2010) and syntactic analysis using dependency grammar (Dukes and Buckwalter 2010). The motivation behind this work is to produce a resource that enables further analysis of the Quran, the 1,400-year-old central religious text of Islam. This project contrasts with other Arabic treebanks by providing a deep linguistic model based on the historical traditional grammar known as i′rāb (إعراب). By adapting this well-known canon of Quranic grammar into a familiar tagset, it is possible to encourage online annotation by Arabic linguists and Quranic experts. This article presents a new approach to linguistic annotation of an Arabic corpus: online supervised collaboration using a multi-stage approach. The stages include automatic rule-based tagging, initial manual verification, and online supervised collaborative proofreading. A popular website attracting thousands of visitors per day, the Quranic Arabic Corpus has approximately 100 unpaid volunteer annotators, each suggesting corrections to the existing linguistic tagging. To ensure a high-quality resource, a small number of expert annotators are promoted to a supervisory role, allowing them to review or veto suggestions made by other collaborators. The Quran also benefits from a large body of existing historical grammatical analysis, which may be leveraged during this review. In this paper we evaluate and report on the effectiveness of the chosen annotation methodology. We also discuss the unique challenges of annotating Quranic Arabic online and describe the custom linguistic software used to aid collaborative annotation.
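To make the multi-stage workflow concrete, here is a minimal sketch of the suggest-then-review cycle described above, in which volunteer suggestions are held pending until a supervisor accepts or vetoes them. The class, method, and field names are illustrative assumptions and are not taken from the project's actual software.

```python
from dataclasses import dataclass, field

@dataclass
class Suggestion:
    location: str            # e.g. "chapter:verse:word" token address
    proposed_tag: str
    annotator: str
    status: str = "pending"  # pending -> accepted | vetoed

@dataclass
class AnnotationStore:
    tags: dict = field(default_factory=dict)       # location -> current tag
    suggestions: list = field(default_factory=list)
    supervisors: set = field(default_factory=set)  # promoted expert annotators

    def load_automatic_tags(self, tagged):
        """Stage 1: seed the store with automatic, rule-based tags."""
        self.tags.update(tagged)

    def suggest(self, location, proposed_tag, annotator):
        """Stage 3: any volunteer may propose a correction; the published
        tag does not change until a supervisor reviews the suggestion."""
        s = Suggestion(location, proposed_tag, annotator)
        self.suggestions.append(s)
        return s

    def review(self, suggestion, supervisor, accept):
        """Supervisors review or veto suggestions made by collaborators."""
        if supervisor not in self.supervisors:
            raise PermissionError("only supervisors may review suggestions")
        suggestion.status = "accepted" if accept else "vetoed"
        if accept:
            self.tags[suggestion.location] = suggestion.proposed_tag

# Illustrative use of the cycle:
store = AnnotationStore(supervisors={"expert_1"})
store.load_automatic_tags({"2:4:1": "N"})
s = store.suggest("2:4:1", "PN", annotator="volunteer_17")
store.review(s, supervisor="expert_1", accept=True)
```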
135.
Although the impacts of wetland loss are often felt at regional scales, effective planning and management require a comparative assessment of local needs, costs, and benefits. Satellite remote sensing can provide spatially explicit, synoptic land cover change information to support such an assessment. However, a common challenge in conventional remote sensing change detection is the difficulty of obtaining phenologically and radiometrically comparable data from the start and end of the time period of interest. An alternative approach is to use a prior land cover classification as a surrogate for historic satellite data and to examine the self-consistency of class spectral reflectances in recent imagery. We produced a 30-meter resolution wetland change probability map for the U.S. mid-Atlantic region by applying an outlier detection technique to a base classification provided by the National Wetlands Inventory (NWI). Outlier-resistant measures, the median and the median absolute deviation, were used to represent the spectral reflectance characteristics of the wetland class populations and formed the basis for calculating a pixel change likelihood index. The individual scene index values were merged into a consistent region-wide map and converted to pixel change probability using a logistic regression calibrated through interpretation of historic and recent aerial photography. The accuracy of a regional change/no-change map produced from the change probabilities was estimated at 89.6%, with a Kappa of 0.779. The change probabilities identify areas for closer inspection of change cause, impact, and mitigation potential. With additional work to resolve confusion resulting from natural spatial heterogeneity and variations in land use, automated updating of NWI maps and estimation of areal rates of wetland change may be possible. We also discuss extensions of the technique to address specific applications such as monitoring marsh degradation due to sea level rise and mapping of invasive species.
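To make the two quantitative steps in the abstract concrete, here is a minimal sketch (not the study's actual code) of a median/MAD-based change likelihood index and its logistic conversion to a change probability. The band values and the logistic coefficients below are illustrative placeholders, not calibrated values from the paper.

```python
import numpy as np

def change_likelihood_index(pixels, class_median, class_mad):
    """Robust per-pixel deviation from the mapped class: band-wise
    |reflectance - class median| scaled by the class MAD, summed over
    bands.  Large values flag pixels that no longer resemble the class
    the base (NWI-derived) map assigned to them."""
    # 1.4826 rescales MAD to approximate a standard deviation for
    # normally distributed reflectances (a common convention).
    z = np.abs(pixels - class_median) / (1.4826 * class_mad)
    return z.sum(axis=-1)

def change_probability(index, beta0, beta1):
    """Logistic mapping from likelihood index to change probability;
    beta0 and beta1 would be fitted against photo-interpreted
    change/no-change reference samples, as in the study."""
    return 1.0 / (1.0 + np.exp(-(beta0 + beta1 * index)))

# Illustrative 6-band reflectances for three pixels of one wetland class;
# the third pixel deviates strongly from the class medians.
pixels = np.array([[0.05, 0.08, 0.06, 0.30, 0.18, 0.09],
                   [0.05, 0.07, 0.06, 0.32, 0.17, 0.08],
                   [0.12, 0.15, 0.14, 0.10, 0.25, 0.20]])
class_median = np.array([0.05, 0.075, 0.06, 0.31, 0.175, 0.085])
class_mad    = np.array([0.004, 0.006, 0.004, 0.015, 0.008, 0.006])

idx = change_likelihood_index(pixels, class_median, class_mad)
print(change_probability(idx, beta0=-4.0, beta1=0.1))
```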
136.
Given a graph with edges colored Red and Blue, we study the problem of sampling and approximately counting the number of matchings with exactly k Red edges. We solve the problem of estimating the number of perfect matchings with exactly k Red edges for dense graphs. We study a Markov chain on the space of all matchings of a graph that favors matchings with k Red edges. We show that it is rapidly mixing using non-traditional canonical paths that can backtrack. We show that this chain can be used to sample matchings in the 2-dimensional toroidal lattice of any fixed size with k Red edges, where the horizontal edges are Red and the vertical edges are Blue. An extended abstract appeared in J.R. Correa, A. Hevia and M.A. Kiwi (eds.) Proceedings of the 7th Latin American Theoretical Informatics Symposium, LNCS 3887, pp. 190–201, Springer, 2006. N. Bhatnagar’s and D. Randall’s research was supported in part by NSF grants CCR-0515105 and DMS-0505505. V.V. Vazirani’s research was supported in part by NSF grants 0311541, 0220343 and CCR-0515186. N. Bhatnagar’s and E. Vigoda’s research was supported in part by NSF grant CCR-0455666.
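The chain in the paper is analyzed with backtracking canonical paths; as a rough illustration of the general idea of biasing a matching chain toward k Red edges, here is a generic Metropolis-style sketch. This is a simplified variant written for this note, not the specific chain whose mixing time the paper analyzes; the weight function and the parameter `lam` are assumptions.

```python
import random

def metropolis_matching_step(matching, edges, colors, k, lam=4.0, rng=random):
    """One step of a generic Metropolis chain on the matchings of a graph,
    biased toward matchings with exactly k Red edges through the weight
    w(M) = lam ** (-|#Red(M) - k|).  Illustrative only."""
    def red_count(m):
        return sum(1 for e in m if colors[e] == "R")

    def weight(m):
        return lam ** (-abs(red_count(m) - k))

    e = rng.choice(edges)
    u, v = e
    proposal = set(matching)
    if e in matching:
        proposal.discard(e)                                   # delete move
    elif all(u not in f and v not in f for f in matching):
        proposal.add(e)                                       # add move: both endpoints free
    else:
        return matching                                       # blocked proposal: stay put

    # Metropolis filter: accept with probability min(1, w(M')/w(M)).
    if rng.random() < min(1.0, weight(proposal) / weight(matching)):
        return frozenset(proposal)
    return matching

# Toy use: a 4-cycle with Red horizontal edges and Blue vertical edges.
edges = [("a", "b"), ("c", "d"), ("a", "c"), ("b", "d")]
colors = {("a", "b"): "R", ("c", "d"): "R", ("a", "c"): "B", ("b", "d"): "B"}
M = frozenset()
for _ in range(10_000):
    M = metropolis_matching_step(M, edges, colors, k=1)
print(M)
```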
137.
A review of smart homes - present state and future challenges
In the era of information technology, the elderly and disabled can be monitored with numerous intelligent devices. Sensors can be implanted in their homes for continuous mobility assistance and non-obtrusive disease prevention. Modern sensor-embedded houses, or smart houses, can not only assist people with reduced physical functions but also help resolve the social isolation they face. They are capable of providing assistance without limiting or disturbing the resident's daily routine, giving him or her greater comfort, pleasure, and well-being. This article presents an international selection of leading smart home projects, as well as the associated technologies of wearable/implantable monitoring systems and assistive robotics; the latter are often designed as components of the larger smart home environment. The article concludes by discussing future challenges of the domain.
138.
The timestamp problem captures a fundamental aspect of asynchronous distributed computing. It allows processes to label events throughout the system with timestamps that provide information about the real-time ordering of those events. We consider the space complexity of wait-free implementations of timestamps from shared read-write registers in a system of n processes. We prove a lower bound on the number of registers required. If the timestamps are elements of a nowhere dense set, for example the integers, we prove a stronger, and tight, lower bound of n. However, if timestamps are not drawn from a nowhere dense set, this bound can be beaten: we give an implementation that uses n − 1 (single-writer) registers. We also consider the special case of anonymous implementations, where processes are programmed identically and do not have unique identifiers. In contrast to the general case, we prove that anonymous timestamp implementations require n registers, and we give an implementation showing that this lower bound is tight. This is the first anonymous timestamp implementation that uses a finite number of registers.
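For readers unfamiliar with the object being implemented, below is a sketch of the classic n-register timestamp construction, in which each process owns one single-writer register. It only illustrates the timestamp interface; it is not the paper's space-optimal n − 1 register algorithm or its anonymous variant, and register reads and writes are assumed atomic.

```python
class SimpleTimestamps:
    """Classic timestamp construction from n single-writer registers,
    one per process: to obtain a new timestamp, read ("collect") all
    registers, take the maximum plus one, and write the result to your
    own register."""

    def __init__(self, n):
        # reg[i] is written only by process i (single-writer, multi-reader);
        # individual reads and writes are assumed to be atomic.
        self.reg = [0] * n

    def get_timestamp(self, i):
        collect = list(self.reg)       # read every register once
        ts = max(collect) + 1
        self.reg[i] = ts               # only process i writes reg[i]
        return (ts, i)                 # process id breaks ties

    @staticmethod
    def ordered_before(t1, t2):
        """If the operation that obtained t1 finished before the operation
        that obtained t2 started, then ordered_before(t1, t2) is True."""
        return t1 < t2
```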
139.
Studies have demonstrated that students prefer PowerPoint and respond favorably to classes when it is used, but few studies have addressed the physical structure of PowerPoint slides. In this study, students enrolled in several psychology classes on two campuses completed a 36-item questionnaire regarding their preferences for the use of PowerPoint in the classroom. Students preferred key-phrase outlines; pictures and graphs; slides built line by line; sounds drawn from popular media or that support the pictures or graphics on the slide; color backgrounds; and dimmed lights. It is recommended that professors pay attention to the physical aspects of PowerPoint slides and handouts to further enhance students’ educational experience.
140.
Covering generalized rough sets are an improvement of the traditional rough set model that can deal with more complex practical problems which the traditional model cannot handle. It is well known that any generalization of traditional rough set theory should first have a practical applied background, and two important theoretical issues must be addressed. The first is to present reasonable definitions of the set approximations, and the second is to develop reasonable algorithms for attribute reduction. The existing covering generalized rough sets, however, mainly pay attention to constructing approximation operators: the ideas behind the lower approximations are similar, but the ideas behind the upper approximations differ and all seem to be unreasonable. Furthermore, less effort has been devoted to the applied background and the attribute reduction of covering generalized rough sets. In this paper we concentrate our discussion on these two issues. We first discuss the applied background of covering generalized rough sets by proposing three kinds of datasets which traditional rough sets cannot handle, and we improve the definition of the upper approximation for covering generalized rough sets to make it more reasonable than the existing ones. We then study attribute reduction with covering generalized rough sets and present an algorithm based on the discernibility matrix to compute all attribute reducts. These discussions lay a basic foundation for covering generalized rough set theory and broaden its applications.
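As context for the approximation operators discussed above, here is a minimal sketch of one common pair of covering-based approximations: the lower approximation as the union of blocks inside X, and the upper approximation as the union of blocks meeting X. This is illustrative only; the paper argues that existing upper approximations are unsatisfactory and proposes an improved definition that this sketch does not reproduce, and the discernibility-matrix reduct algorithm is not implemented here.

```python
def lower_approximation(cover, X):
    """Union of the covering blocks entirely contained in X
    (the lower-approximation idea shared by most covering models)."""
    X = set(X)
    result = set()
    for K in cover:
        if set(K) <= X:
            result |= set(K)
    return result

def upper_approximation(cover, X):
    """Union of the covering blocks that intersect X (one common
    textbook definition, not the improved operator proposed in the paper)."""
    X = set(X)
    result = set()
    for K in cover:
        if set(K) & X:
            result |= set(K)
    return result

# Toy covering of U = {1, ..., 5}; blocks may overlap, unlike a partition.
cover = [{1, 2}, {2, 3}, {3, 4, 5}, {1, 5}]
X = {1, 2, 3}
print(lower_approximation(cover, X))   # {1, 2, 3}
print(upper_approximation(cover, X))   # {1, 2, 3, 4, 5}
```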