101.
The Quranic Arabic Corpus (http://corpus.quran.com) is a collaboratively constructed linguistic resource initiated at the University of Leeds, with multiple layers of annotation including part-of-speech tagging, morphological segmentation (Dukes and Habash 2010) and syntactic analysis using dependency grammar (Dukes and Buckwalter 2010). The motivation behind this work is to produce a resource that enables further analysis of the Quran, the 1,400-year-old central religious text of Islam. This project contrasts with other Arabic treebanks by providing a deep linguistic model based on the historical traditional grammar known as i′rāb (إعراب). By adapting this well-known canon of Quranic grammar into a familiar tagset, it is possible to encourage online annotation by Arabic linguists and Quranic experts. This article presents a new approach to linguistic annotation of an Arabic corpus: online supervised collaboration using a multi-stage approach. The stages include automatic rule-based tagging, initial manual verification, and online supervised collaborative proofreading. A popular website attracting thousands of visitors per day, the Quranic Arabic Corpus has approximately 100 unpaid volunteer annotators, each suggesting corrections to existing linguistic tagging. To ensure a high-quality resource, a small number of expert annotators are promoted to a supervisory role, allowing them to review or veto suggestions made by other collaborators. The Quran also benefits from a large body of existing historical grammatical analysis, which may be leveraged during this review. In this paper we evaluate and report on the effectiveness of the chosen annotation methodology. We also discuss the unique challenges of annotating Quranic Arabic online and describe the custom linguistic software used to aid collaborative annotation.
102.
Although the impacts of wetland loss are often felt at regional scales, effective planning and management require a comparative assessment of local needs, costs, and benefits. Satellite remote sensing can provide spatially explicit, synoptic land cover change information to support such an assessment. However, a common challenge in conventional remote sensing change detection is the difficulty of obtaining phenologically and radiometrically comparable data from the start and end of the time period of interest. An alternative approach is to use a prior land cover classification as a surrogate for historic satellite data and to examine the self-consistency of class spectral reflectances in recent imagery. We produced a 30-meter resolution wetland change probability map for the U.S. mid-Atlantic region by applying an outlier detection technique to a base classification provided by the National Wetlands Inventory (NWI). Outlier-resistant measures – the median and median absolute deviation – were used to represent spectral reflectance characteristics of wetland class populations, and formed the basis for the calculation of a pixel change likelihood index. The individual scene index values were merged into a consistent region-wide map and converted to pixel change probability using a logistic regression calibrated through interpretation of historic and recent aerial photography. The accuracy of a regional change/no-change map produced from the change probabilities was estimated at 89.6%, with a Kappa of 0.779. The change probabilities identify areas for closer inspection of change cause, impact, and mitigation potential. With additional work to resolve confusion resulting from natural spatial heterogeneity and variations in land use, automated updating of NWI maps and estimates of areal rates of wetland change may be possible. We also discuss extensions of the technique to address specific applications such as monitoring marsh degradation due to sea level rise and mapping of invasive species.
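To make the robust index concrete, the following is a minimal sketch, not the authors' code, of a per-class change-likelihood computation using the median and median absolute deviation, followed by a logistic conversion to probability. The function name, the band handling, and the coefficients a and b are illustrative assumptions; the actual calibration was performed against interpreted historic and recent aerial photography.

```python
import numpy as np

def change_probability(reflectance, class_map, a=-4.0, b=1.2):
    """reflectance: (bands, rows, cols) recent imagery; class_map: (rows, cols) base classification.

    a and b are stand-ins for the logistic regression calibration."""
    index = np.zeros(class_map.shape, dtype=float)
    for cls in np.unique(class_map):
        mask = class_map == cls
        pixels = reflectance[:, mask]                        # (bands, n_pixels) for this class
        med = np.median(pixels, axis=1, keepdims=True)       # outlier-resistant class center
        mad = np.median(np.abs(pixels - med), axis=1, keepdims=True) + 1e-6
        z = np.abs(pixels - med) / mad                       # robust per-band deviation
        index[mask] = z.mean(axis=0)                         # combine bands into one likelihood index
    return 1.0 / (1.0 + np.exp(-(a + b * index)))            # logistic mapping to change probability
```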
103.
We present a practical algorithm for sampling the product of environment map lighting and surface reflectance. Our method builds on wavelet‐based importance sampling, but has a number of important advantages over previous methods. Most importantly, we avoid using precomputed reflectance functions by sampling the BRDF on‐the‐fly. Hence, all types of materials can be handled, including anisotropic and spatially varying BRDFs, as well as procedural shaders. This also makes it possible to use very high resolution, uncompressed environment maps. Our results show that this gives a significant reduction of variance compared to using lower resolution approximations. In addition, we study the wavelet product, and present a faster algorithm geared for sampling purposes. For our application, the computations are reduced to a simple quadtree‐based multiplication. We build the BRDF approximation and evaluate the product in a single tree traversal, which makes the algorithm both faster and more flexible than previous methods.
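As a rough illustration of quadtree-based product sampling, the sketch below descends a quadtree whose nodes cache summed environment-map luminance, evaluates the BRDF on-the-fly at each child's representative point, and picks children in proportion to the product. The Node layout and the brdf(u, v) interface are assumptions made for the example, not the paper's data structures or traversal.

```python
import random
from dataclasses import dataclass, field

@dataclass
class Node:
    luminance: float                              # summed env-map luminance over this square
    children: list = field(default_factory=list)  # four children, or empty for a leaf

def sample_product(node, brdf, rng=random, u=0.0, v=0.0, size=1.0, p=1.0):
    """Draw one (u, v) sample over the unit square with density ~ env * BRDF; return ((u, v), pdf)."""
    if not node.children:
        return (u + 0.5 * size, v + 0.5 * size), p / (size * size)
    half = 0.5 * size
    offsets = [(0.0, 0.0), (half, 0.0), (0.0, half), (half, half)]
    # Child weights: cached environment luminance times an on-the-fly BRDF evaluation.
    w = [c.luminance * max(brdf(u + du + 0.5 * half, v + dv + 0.5 * half), 0.0)
         for c, (du, dv) in zip(node.children, offsets)]
    total = sum(w)
    if total <= 0.0:                              # degenerate region: fall back to a uniform choice
        w, total = [1.0] * 4, 4.0
    i = rng.choices(range(4), weights=w)[0]       # pick a child proportionally to its weight
    du, dv = offsets[i]
    return sample_product(node.children[i], brdf, rng, u + du, v + dv, half, p * w[i] / total)
```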
104.
Given a graph with edges colored Red and Blue, we study the problem of sampling and approximately counting the number of matchings with exactly k Red edges. We solve the problem of estimating the number of perfect matchings with exactly k Red edges for dense graphs. We study a Markov chain on the space of all matchings of a graph that favors matchings with k Red edges. We show that it is rapidly mixing using non-traditional canonical paths that can backtrack. We show that this chain can be used to sample matchings in the 2-dimensional toroidal lattice of any fixed size with k Red edges, where the horizontal edges are Red and the vertical edges are Blue. An extended abstract appeared in J.R. Correa, A. Hevia and M.A. Kiwi (eds.) Proceedings of the 7th Latin American Theoretical Informatics Symposium, LNCS 3887, pp. 190–201, Springer, 2006. N. Bhatnagar’s and D. Randall’s research was supported in part by NSF grants CCR-0515105 and DMS-0505505. V.V. Vazirani’s research was supported in part by NSF grants 0311541, 0220343 and CCR-0515186. N. Bhatnagar’s and E. Vigoda’s research was supported in part by NSF grant CCR-0455666.
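For intuition only, here is a minimal Metropolis-style sketch of a chain on the matchings of a red/blue-colored graph whose stationary distribution weights a matching by lambda raised to its number of red edges, so that a suitable lambda biases the walk toward the desired red-edge count. The edge representation and the parameter lambda_ are illustrative assumptions; the paper's actual chain and its canonical-path analysis are considerably more involved.

```python
import random

def metropolis_matchings(edges, lambda_, steps, rng=None):
    """edges: list of ((u, v), colour) with colour 'red' or 'blue'; returns the final matching."""
    rng = rng or random.Random(0)
    matching, matched = set(), set()             # edge indices in the matching / matched vertices
    for _ in range(steps):
        i = rng.randrange(len(edges))
        (u, v), colour = edges[i]
        if i in matching:                        # propose removing edge i
            new, delta = matching - {i}, (-1 if colour == "red" else 0)
        elif u not in matched and v not in matched:
            new, delta = matching | {i}, (1 if colour == "red" else 0)   # propose adding edge i
        else:
            continue                             # proposal blocked by the matching constraint
        # Metropolis acceptance for the weight lambda_ ** (number of red edges)
        if rng.random() < min(1.0, lambda_ ** delta):
            matching = new
            matched = {x for j in matching for x in edges[j][0]}
    return matching
```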
105.
A review of smart homes - present state and future challenges
In the era of information technology, the elderly and disabled can be monitored with numerous intelligent devices. Sensors can be implanted into their homes for continuous mobility assistance and non-obtrusive disease prevention. Modern sensor-embedded houses, or smart houses, can not only assist people with reduced physical functions but also help resolve the social isolation they face. They are capable of providing assistance without limiting or disturbing the resident's daily routine, giving him or her greater comfort, pleasure, and well-being. This article presents an international selection of leading smart home projects, as well as the associated technologies of wearable/implantable monitoring systems and assistive robotics. The latter are often designed as components of the larger smart home environment. The paper concludes by discussing future challenges of the domain.
106.
The timestamp problem captures a fundamental aspect of asynchronous distributed computing. It allows processes to label events throughout the system with timestamps that provide information about the real-time ordering of those events. We consider the space complexity of wait-free implementations of timestamps from shared read-write registers in a system of n processes. We prove a lower bound on the number of registers required. If the timestamps are elements of a nowhere dense set, for example the integers, we prove a stronger, and tight, lower bound of n. However, if timestamps are not from a nowhere dense set, this bound can be beaten: we give an implementation that uses n − 1 (single-writer) registers. We also consider the special case of anonymous implementations, where processes are programmed identically and do not have unique identifiers. In contrast to the general case, we prove that anonymous timestamp implementations require n registers. We also give an implementation to prove that this lower bound is tight. This is the first anonymous timestamp implementation that uses a finite number of registers.
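For readers unfamiliar with the object, a minimal sketch of the folklore n-register timestamp construction is given below: obtain a timestamp by scanning all single-writer registers and writing the maximum plus one into your own. It is included only to make the problem concrete; it is not the paper's (n - 1)-register algorithm or its anonymous variant.

```python
class Timestamps:
    """Folklore wait-free timestamp object from n single-writer read-write registers."""
    def __init__(self, n):
        self.reg = [0] * n          # register i is written only by process i

    def get_timestamp(self, pid):
        ts = max(self.reg) + 1      # scan all registers (each individual read is atomic)
        self.reg[pid] = ts          # single-writer write into this process's own register
        return ts
```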
107.
Studies have demonstrated that students prefer PowerPoint and respond favorably to classes when it is used. Few studies have addressed the physical structure of PowerPoint. In this study, students enrolled in several psychology classes on two campuses completed a 36-item questionnaire regarding their preferences for the use of PowerPoint in the classroom. Students preferred key-phrase outlines; pictures and graphs; slides built line by line; sounds from popular media or sounds that support the pictures or graphics on the slide; color backgrounds; and dimmed lights. It is recommended that professors pay attention to the physical aspects of PowerPoint slides and handouts to further enhance students’ educational experience.
108.
Covering generalized rough sets are an improvement of the traditional rough set model intended to deal with more complex practical problems that the traditional model cannot handle. Any generalization of traditional rough set theory should first have a practical applied background, and two important theoretical issues must then be addressed: presenting reasonable definitions of set approximations, and developing reasonable algorithms for attribute reduction. Existing work on covering generalized rough sets, however, mainly pays attention to constructing approximation operators. The ideas for constructing lower approximations are similar, but the ideas for constructing upper approximations differ and all seem unreasonable. Furthermore, little effort has been put into discussing the applied background and attribute reduction of covering generalized rough sets. In this paper we concentrate on these two issues. We first discuss the applied background of covering generalized rough sets by proposing three kinds of datasets that traditional rough sets cannot handle, and we improve the definition of the upper approximation for covering generalized rough sets to make it more reasonable than the existing ones. We then study attribute reduction with covering generalized rough sets and present an algorithm, based on the discernibility matrix, to compute all the attribute reducts of covering generalized rough sets. These discussions establish a basic foundation for covering generalized rough set theory and broaden its applications.
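To make the basic objects concrete, the sketch below computes the covering lower approximation (shared by essentially all covering models) together with one classical, contested form of upper approximation, the union of blocks meeting the target set. The improved upper approximation proposed in the paper is not reproduced here; the example data are illustrative.

```python
def lower_approx(cover, X):
    """Union of covering blocks entirely contained in X."""
    return set().union(*(K for K in cover if K <= X))

def naive_upper_approx(cover, X):
    """Union of covering blocks that intersect X (one classical, contested choice)."""
    return set().union(*(K for K in cover if K & X))

# Example: a covering of U = {1, 2, 3, 4} and a target set X.
cover = [frozenset({1, 2}), frozenset({2, 3}), frozenset({3, 4})]
X = {1, 2, 3}
print(lower_approx(cover, X))        # {1, 2, 3}
print(naive_upper_approx(cover, X))  # {1, 2, 3, 4}
```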
109.
We consider the problem of designing truthful mechanisms for scheduling n tasks on a set of m parallel related machines in order to minimize the makespan. In what follows, we consider that each task is owned by a selfish agent. This is a variant of the KP-model introduced by Koutsoupias and Papadimitriou (Proc. of STACS 1999, pp. 404–413, 1999) (and of the CKN-model of Christodoulou et al. in Proc. of ICALP 2004, pp. 345–357, 2004) in which the agents cannot choose the machine on which their tasks will be executed. This is done by a centralized authority, the scheduler. However, the agents may manipulate the scheduler by providing false information regarding the length of their tasks. We introduce the notion of increasing algorithm and a simple reduction that transforms any increasing algorithm into a truthful one. Furthermore, we show that some of the classical scheduling algorithms are indeed increasing: the LPT algorithm, the PTAS of Graham (SIAM J. Appl. Math. 17(2):416–429, 1969) in the case of two machines, as well as a simple PTAS for the case of m machines, with m a fixed constant. Our results yield a randomized r(1+ε)-approximation algorithm where r is the ratio between the largest and the smallest speed of the related machines. Furthermore, by combining our approach with the classical result of Shmoys et al. (SIAM J. Comput. 24(6):1313–1331, 1995), we obtain a randomized 2r(1+ε)-competitive algorithm. It should be noted that these results are obtained without payments, unlike most of the existing works in the field of Mechanism Design. Finally, we show that if payments are allowed then our approach gives a (1+ε)-algorithm for the off-line case with related machines.
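As a point of reference, the following sketch implements the classical LPT rule on related machines mentioned above: sort tasks by non-increasing length and greedily give each to the machine that would finish it earliest. It only illustrates the underlying increasing algorithm; the randomization used to obtain truthfulness and the payment-based variant are not reproduced, and the speeds and lengths shown are illustrative.

```python
def lpt_related(lengths, speeds):
    """LPT on related machines: returns (per-machine job lists, resulting makespan)."""
    loads = [0.0] * len(speeds)
    schedule = [[] for _ in speeds]
    for job, p in sorted(enumerate(lengths), key=lambda t: -t[1]):   # longest tasks first
        # choose the machine minimizing its completion time if given this task
        i = min(range(len(speeds)), key=lambda m: loads[m] + p / speeds[m])
        loads[i] += p / speeds[i]
        schedule[i].append(job)
    return schedule, max(loads)

print(lpt_related([7, 5, 4, 3, 2], [2.0, 1.0]))   # two machines, speeds 2 and 1
```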
110.