Sort order: 3,490 results found; search time: 15 ms
21.
22.
A heat balance reaction calorimeter was used to identify the most informative process parameters in polymerizations carried out with the Et[Ind]2ZrCl2-methylaluminoxane catalyst. The viscosity of the reaction mixture increased dramatically during the homopolymerization of ethylene, but it could be controlled through appropriate selection of the reaction medium. Mass transfer between the gas and liquid phases was the rate-determining step for the polymerization when the reaction-mixture-based Reynolds number was below 2500. This mass-transfer limitation was caused by the high activity of the metallocene catalyst and the increased viscosity of the reaction mixture.
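The Reynolds-number threshold above can be illustrated with a short sketch. This is an assumption-laden illustration: the abstract does not specify how the reaction-mixture Reynolds number was defined, so the sketch uses the standard impeller (mixing) Reynolds number Re = rho * N * D^2 / mu; the function names and example values are hypothetical.

```python
def impeller_reynolds(density, impeller_speed, impeller_diameter, viscosity):
    """Impeller (mixing) Reynolds number: Re = rho * N * D^2 / mu.

    density           -- fluid density [kg/m^3]
    impeller_speed    -- rotational speed [1/s]
    impeller_diameter -- impeller diameter [m]
    viscosity         -- dynamic viscosity [Pa*s]
    """
    return density * impeller_speed * impeller_diameter ** 2 / viscosity


def mass_transfer_limited(reynolds, threshold=2500.0):
    # Below the threshold reported in the study, gas-liquid mass
    # transfer becomes the rate-determining step.
    return reynolds < threshold
```

A viscosity increase during polymerization lowers Re for a fixed stirring rate, which is consistent with the abstract's observation that rising viscosity pushes the system into the mass-transfer-limited regime.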
23.
In the design of algorithms for large-scale applications it is essential to consider the problem of minimizing I/O communication. Geographical information systems (GIS) are good examples of such large-scale applications, as they frequently handle huge amounts of spatial data. In this paper we develop efficient external-memory algorithms for a number of important problems involving line segments in the plane, including trapezoid decomposition, batched planar point location, triangulation, red-blue line segment intersection reporting, and general line segment intersection reporting. In GIS, the first three problems are useful for rendering and modeling, and the latter two are frequently used for overlaying maps and extracting information from them.
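As a companion to the I/O-minimization theme, the following sketch evaluates the standard external-memory (I/O model) cost bounds for scanning and sorting, the building blocks such algorithms are typically measured against. The parameter names (n items, m memory capacity, b block size, all in records) follow the usual model; the functions are illustrative and not taken from the paper.

```python
import math


def scan_ios(n, b):
    """I/Os needed to scan n items with block size b: ceil(n / b)."""
    return math.ceil(n / b)


def sort_ios(n, m, b):
    """External-memory sorting bound:
    Theta((n/b) * log_{m/b}(n/b)) I/Os,
    i.e., n/b blocks merged with fan-out m/b per pass."""
    blocks = n / b
    return blocks * math.log(blocks, m / b)
```

For example, sorting a million records with a 10,000-record memory and 100-record blocks costs on the order of 20,000 block transfers, versus 10,000 for a single scan, which is why batched, sort-based techniques dominate this setting.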
24.
The boreal tree line is expected to advance upwards into the mountains and northwards into the tundra due to global warming. The major objective of this study was to find out whether high-resolution airborne laser scanner data can be used to detect very small trees: the pioneers that are pushing the tree line up into the mountains and out onto the tundra. The study was conducted in a sub-alpine/alpine environment in southeast Norway. A total of 342 small trees of Norway spruce, Scots pine, and downy birch with heights ranging from 0.11 to 5.20 m were precisely georeferenced and measured in the field. Laser data were collected with a pulse density of 7.7 m−2. Three different terrain models were used to process the airborne laser point cloud in order to assess the effects of different pre-processing parameters on small tree detection. More than 91% of all trees >1 m tall registered positive laser height values regardless of terrain model. For smaller trees (<1 m), positive height values were found in 5-73% of the cases, depending on the terrain model considered; within this group, the highest rate of positive height values was found for spruce. The more smoothed the terrain model, the larger the portion of trees with positive laser height values. Tree heights derived from the laser data systematically underestimated field-measured heights by 0.40 to 1.01 m, and the standard deviation of the differences between laser-derived and field-measured heights was 0.11-0.73 m. Commission errors, i.e., the detection of terrain objects such as rocks and hummocks as trees, increased significantly as terrain smoothing increased. Thus, if objects cannot be classified into classes such as small trees and terrain objects, many non-tree objects with positive height values cannot be separated from actual trees.
In a monitoring context, i.e., repeated measurements over time, we argue that most other objects such as terrain structures, rocks, and hummocks will remain stable over time, while trees will change as they grow and new trees are established. Thus, this study indicates that, given a high laser pulse density and a certain density of newly established trees, it would be possible to detect a sufficient portion of newly established trees over a 10-year period to claim that tree migration is taking place.
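The core pre-processing step described above, subtracting a terrain model from the laser echoes and keeping returns with positive height, can be sketched as follows. The function names and the callable terrain model are hypothetical stand-ins for an interpolated DTM, not the authors' implementation.

```python
def normalized_heights(points, dtm):
    """Subtract terrain elevation from laser return elevations.

    points -- iterable of (x, y, z) laser echoes
    dtm    -- callable (x, y) -> ground elevation; a hypothetical
              stand-in for an interpolated terrain model
    """
    return [(x, y, z - dtm(x, y)) for x, y, z in points]


def candidate_trees(points, dtm, min_height=0.0):
    # Echoes with positive height above the terrain model are
    # treated as candidate small-tree detections; a smoother DTM
    # tends to yield more positives (both trees and commission
    # errors such as rocks or hummocks).
    return [p for p in normalized_heights(points, dtm) if p[2] > min_height]
```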
25.
A study compares two methods of reading text-based content on mobile phones: traditional scroll-based reading and Rapid Serial Visual Presentation (RSVP), which displays words rapidly in sequence. University students used a prototype called Feedo to test both methods, and their reading comprehension, efficiency, and preference ratings were measured. The results show that efficiency increases with fast RSVP, comprehension is equivalent, and preference ratings are lower than with self-paced scrolling.
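A minimal sketch of the RSVP presentation loop described above, assuming a fixed words-per-minute rate; Feedo's actual implementation is not described in the abstract, so the function name and the injected `display` callback are hypothetical.

```python
import time


def rsvp(text, wpm=400, display=print):
    """Rapid Serial Visual Presentation: show one word at a time
    at a fixed rate. `display` is injected so the sketch can be
    exercised without a real screen."""
    delay = 60.0 / wpm  # seconds per word
    for word in text.split():
        display(word)
        time.sleep(delay)
```

At 400 wpm each word is shown for 150 ms, which is the kind of "fast RSVP" pace at which the study reports an efficiency gain over scrolling.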
26.
Comments on the article by J. L. Alpert et al (see record 2000-13581-002), which presented the report of the American Psychological Association Working Group on Investigation of Memories of Childhood Abuse. The authors discuss 4 issues in this commentary: (a) the assumptions and evidence used to support the case for dissociated and recovered memories (noting that the evidence is weak and the assumptions internally inconsistent as well as contradictory to a mass of experimental evidence about human memory); (b) the process by which dissociated memories are said to be recovered (events that were originally very poorly encoded as fragmentary, kinesthetic memories cannot be recovered with accuracy later); (c) 4 bodies of relevant, but neglected, research on human memory (reminiscence and hypermnesia, effectiveness of retrieval cues, priming in implicit memory tests, and intentional forgetting); and (d) the issue of appropriate research strategies to gain evidence on the thorny issues of long-delayed retrieval of information. Current evidence does not support the conclusion that memories of repeated abuse are dissociated and recovered with accuracy years later. (PsycINFO Database Record (c) 2010 APA, all rights reserved)
27.
28.
In a world in which millions of people express their opinions about commercial products in blogs, wikis, forums, chats, and social networks, the distillation of knowledge from this huge amount of unstructured information can be a key factor for marketers who want to create an image or identity in the minds of their customers for their product, brand, or organization. Opinion mining for product positioning is, in fact, becoming an increasingly popular research field, but the extraction of useful information from social media is not a simple task. In this work we merge AI and Semantic Web techniques to extract, encode, and represent this unstructured information. In particular, we use Sentic Computing, a multi-disciplinary approach to opinion mining and sentiment analysis, to semantically and affectively analyze text and encode the results in a semantics-aware format according to different web ontologies. Finally, we represent this information as an interconnected knowledge base that is browsable through a multi-faceted classification website.
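To make the idea of polarity extraction from unstructured text concrete, here is a deliberately tiny lexicon-based scorer. It is not Sentic Computing (which operates on concepts rather than isolated words and draws on far richer affective resources); the lexicon entries and function name are invented for illustration.

```python
# Hypothetical mini-lexicon; real resources such as SenticNet
# assign graded polarity to many thousands of concepts.
LEXICON = {"great": 1.0, "love": 0.8, "poor": -0.7, "awful": -1.0}


def polarity(text):
    """Average polarity of known words in `text`; 0.0 if none match.
    A word-level bag-of-words baseline that concept-level approaches
    like Sentic Computing are designed to improve upon."""
    scores = [LEXICON[w] for w in text.lower().split() if w in LEXICON]
    return sum(scores) / len(scores) if scores else 0.0
```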
29.
To make media resources a prime citizen on the Web, we have to go beyond simply replicating digital media files. The Web is based on hyperlinks between Web resources, and that includes hyperlinking out of resources (e.g., from a word or an image within a Web page) as well as hyperlinking into resources (e.g., fragment URIs into Web pages). To turn video and audio into hypervideo and hyperaudio, we need to enable hyperlinking into and out of them. The W3C Media Fragments Working Group is taking on the challenge to further embrace W3C's mission to lead the World Wide Web to its full potential by developing a Media Fragment protocol and guidelines that ensure the long-term growth of the Web. The major contribution of this paper is the introduction of Media Fragments as a media-format independent, standard means of addressing media resources using URIs. Moreover, we explain how the HTTP protocol can be used and extended to serve Media Fragments and what the impact is for current Web-enabled media formats.
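The temporal dimension of Media Fragment URIs (e.g. `#t=10,20` to address seconds 10 through 20 of a video) can be parsed in a few lines. This sketch handles only plain NPT seconds and is an illustrative reading of the fragment syntax, not a complete implementation of the W3C specification, which also allows clock values and other dimensions such as spatial and track fragments.

```python
def parse_temporal_fragment(uri):
    """Parse the temporal dimension of a Media Fragment URI,
    e.g. 'video.mp4#t=10,20' -> (10.0, 20.0).

    Returns (start, end) in seconds, where a missing start defaults
    to 0.0 and a missing end is returned as None (play to the end).
    Returns None if the URI carries no temporal fragment."""
    _, _, frag = uri.partition("#")
    for part in frag.split("&"):
        if part.startswith("t="):
            start, _, end = part[2:].partition(",")
            return (float(start) if start else 0.0,
                    float(end) if end else None)
    return None
```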
30.
In previous studies we have reported our findings and concerns about the reliability and validity of the evaluation procedures used in comparative studies of competing effort prediction models. In particular, we have raised concerns about the use of accuracy statistics to rank and select models, a concern strengthened by the observed lack of consistent findings. This study offers more insight into the causes of conclusion instability by elaborating on the findings of our previous work concerning the reliability and validity of these evaluation procedures. We show that model selection based on the accuracy statistics MMRE, MMER, MBRE, and MIBRE contributes to conclusion instability as well as to the selection of inferior models. We argue, and show, that the evaluation procedure must include an evaluation of whether the functional form of the prediction model makes sense, to better prevent the selection of inferior models.
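The four accuracy statistics named above are easy to state precisely. The following definitions use the conventional formulas (MRE = |y - p| / y, MER = |y - p| / p, balanced relative error with min, inverted balanced relative error with max); this is an assumption, since the paper's exact formulations are not quoted in the abstract.

```python
def _mean(values):
    values = list(values)
    return sum(values) / len(values)


def mmre(actual, predicted):
    """Mean Magnitude of Relative Error: mean of |y - p| / y."""
    return _mean(abs(y - p) / y for y, p in zip(actual, predicted))


def mmer(actual, predicted):
    """Mean Magnitude of Error Relative to the estimate: |y - p| / p."""
    return _mean(abs(y - p) / p for y, p in zip(actual, predicted))


def mbre(actual, predicted):
    """Mean Balanced Relative Error: |y - p| / min(y, p)."""
    return _mean(abs(y - p) / min(y, p) for y, p in zip(actual, predicted))


def mibre(actual, predicted):
    """Mean Inverted Balanced Relative Error: |y - p| / max(y, p)."""
    return _mean(abs(y - p) / max(y, p) for y, p in zip(actual, predicted))
```

Note that the four statistics penalize over- and under-estimation asymmetrically in different ways, which is one mechanism by which ranking models on different statistics can yield inconsistent winners.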