Search results: 5,270 matches found (search time: 31 ms).
101.
Tests comparing image sets can play a critical role in PET research, providing a yes-no answer to the question "Are two image sets different?" The statistical goal is to determine how often observed differences would occur by chance alone. We examined randomization methods to provide several omnibus tests for PET images and compared these tests with two currently used methods. In the first series of analyses, normally distributed image data were simulated, fulfilling the requirements of standard statistical tests. These analyses generated power estimates and compared the various test statistics under optimal conditions. Varying whether the standard deviations were local or pooled estimates provided an assessment of a distinguishing feature between the SPM and Montreal methods. In a second series of analyses, we more closely simulated current PET acquisition and analysis techniques. Finally, PET images from normal subjects were used as an example of randomization. Randomization proved to be a highly flexible and powerful statistical procedure. Furthermore, the randomization test does not require the extensive and unrealistic statistical assumptions made by standard procedures currently in use.
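The core randomization idea, estimating how often the observed difference would arise by chance when group labels are reshuffled, can be sketched as a simple permutation test. This is an illustrative sketch in Python, not the authors' PET-specific implementation; the data values are invented:

```python
import random

def permutation_test(group_a, group_b, n_perm=10000, seed=0):
    """Estimate how often the observed absolute mean difference arises
    by chance when group labels are randomly reassigned (two-sided)."""
    rng = random.Random(seed)
    observed = abs(sum(group_a) / len(group_a) - sum(group_b) / len(group_b))
    pooled = list(group_a) + list(group_b)
    n_a = len(group_a)
    count = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)
        diff = abs(sum(pooled[:n_a]) / n_a
                   - sum(pooled[n_a:]) / (len(pooled) - n_a))
        if diff >= observed:
            count += 1
    return count / n_perm  # estimated p-value

# Illustrative voxel values from two image sets (invented numbers)
a = [1.2, 1.4, 1.1, 1.3, 1.5, 1.2]
b = [1.8, 2.0, 1.7, 1.9, 2.1, 1.8]
p = permutation_test(a, b)
```

Because the procedure derives the null distribution from the data themselves, it needs no normality assumption, which is the flexibility the abstract highlights.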
102.
Parabolic curves of evolving surfaces   (total citations: 2; self-citations: 1; citations by others: 1)
In this article we show how certain geometric structures associated with a smooth surface evolve as the shape of the surface changes in a 1-parameter family. We concentrate on the parabolic set and its image under the Gauss map, but the same techniques also classify the changes in the dual of the surface. All of these have significance for computer vision, for example through their connection with specularities and apparent contours. With the aid of our complete classification, which includes all the phenomena associated with multi-contact tangent planes as well as those associated with parabolic sets, we re-examine examples given by J. Koenderink in his book (1990) under the title of Morphological Scripts. We also explain some of the connections between parabolic sets and ridges of a surface, where principal curvatures achieve turning values along lines of curvature. The point of view taken is the analysis of the contact between surfaces and their tangent planes; a systematic investigation of this contact, using singularity theory, yields the results. The mathematical details are suppressed here and appear in Bruce et al. (1993). The third author was supported by the Esprit grant VIVA while this paper was in preparation.
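For orientation, the parabolic set is where the Gaussian curvature of the surface vanishes; for a graph surface z = f(x, y) it is the zero set of the Hessian determinant f_xx·f_yy − f_xy², since the Gaussian curvature shares its sign. A minimal numerical sketch (the surface x² + y³, whose parabolic curve is the line y = 0, is an arbitrary example, not one from the paper):

```python
def hessian_det(f, x, y, h=1e-4):
    """Finite-difference approximation of f_xx * f_yy - f_xy**2 for
    z = f(x, y).  The parabolic set of the graph surface is where this
    vanishes; it is positive at elliptic points, negative at hyperbolic."""
    fxx = (f(x + h, y) - 2 * f(x, y) + f(x - h, y)) / h**2
    fyy = (f(x, y + h) - 2 * f(x, y) + f(x, y - h)) / h**2
    fxy = (f(x + h, y + h) - f(x + h, y - h)
           - f(x - h, y + h) + f(x - h, y - h)) / (4 * h**2)
    return fxx * fyy - fxy**2

# Example surface: f_xx = 2, f_yy = 6y, f_xy = 0, so the determinant
# is 12y and the parabolic curve is y = 0.
f = lambda x, y: x**2 + y**3
```

Crossing the parabolic curve here takes the surface from elliptic (y > 0) to hyperbolic (y < 0) points, the transition whose evolution the paper classifies.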
103.
Odin the Allfather had in his service two great ravens. These ravens' names were Hugin (Thought) and Munin (Memory), and every morning at dawn they would fly off over Midgard (the world) in search of news and information to learn more about humans and their activities. At sundown, they would return to Odin, where they would perch one on each of Odin's shoulders and whisper into his ears all that they had seen and heard.
Experience, stored in the brain as memory, is the raw material for intelligence and thought. It has been suggested that at sundown (i.e., during sleep) the brain adjusts its own synaptic matrix to enable adaptive responses to future events by a process of gradient descent optimization, involving repeated reactivations of recent and older memories and gradual adjustment of the synaptic weights. Memory retrieval, thought, and the generation of adaptive behavioral responses involve globally coordinated trajectories through the neuronal state-space, mediated by appropriate synaptic linkages. Artificial neural networks designed to implement even the most rudimentary forms of memory, knowledge extraction, and adaptive behavior incorporate massively and symmetrically interconnected nodes; yet, in the cerebral cortex, the probability of a synaptic connection between any two arbitrarily chosen cells is on the order of 10⁻⁶, i.e., so close to zero that a naive modeler might neglect this parameter altogether. The probability of a symmetric connection is even smaller (10⁻¹²). How, then, are thought and memory even possible? The solution appears to have been the evolution of a modular, hierarchical cortical architecture, in which the modules are internally highly connected but only weakly interconnected with other modules. Appropriate inter-modular linkages are mediated indirectly via common linkages with higher-level modules collectively known as association cortex. The hippocampal formation in the temporal lobe is the highest level of association cortex. It generates sequentially coupled patterns unique to the location and content of experience, but which do not contain the actual stored data. Rather, the patterns serve as pointers or 'links' to the data. Spontaneous reactivation of these linking patterns during sleep may enable the retrieval of recent sequences of experience stored in the lower levels of the cortex and the gradual extraction of knowledge from them. In this essay I explore these ideas, their implications, and the neuroscientific evidence for them.
104.
This paper reports on an aspect of the EC-funded Argunaut project, which researched and developed awareness tools for moderators of online dialogues. In this study we investigate the nature of creative thinking in online dialogues and whether this creative thinking can be coded for and recognized automatically, such that moderators can be alerted when creative thinking is occurring or when it has not occurred after a period of time. We outline a dialogic theory of creativity, as the emergence of new perspectives from the interplay of voices, and test this theory using a range of methods: a coding scheme that combined coding for creative thinking with more established codes for critical thinking, artificial intelligence pattern-matching techniques to see if our codes could be read automatically from maps, and 'key event recall' interviews to explore the experience of participants. Our findings are that: (1) the emergence of new perspectives in a graphical dialogue map can be recognized by our coding scheme, supported by a machine pattern-matching algorithm, in a way that can be used to provide awareness indicators for moderators; (2) the trigger events leading to the emergence of new perspectives in the online dialogues studied were most commonly disagreements; and (3) the spatial representation of messages in a graphically mediated synchronous dialogue environment such as Digalo may offer more affordance for creativity than the much more common scrolling text chat environments. All these findings support the usefulness of our new account of creativity in online dialogues based on dialogic theory and demonstrate that this account can be operationalised through machine coding in a way that can be turned into alerts for moderators.
105.
Face detection is a widely studied topic in computer vision, and recent advances in algorithms, low-cost processing, and CMOS imagers make it practical for embedded consumer applications. As with graphics, the best cost-performance ratio is achieved with dedicated hardware. In this paper, we design an embedded face detection system for handheld digital cameras or camera phones. The challenges of face detection in embedded environments include efficient pipeline design, bandwidth constraints set by low-cost memory, the need to find parallelism, and efficient utilization of the available hardware resources. In addition, consumer applications require reliability, which calls for a hard real-time approach to guarantee that processing deadlines are met. Specifically, the main contributions of the paper include: (1) incorporation of a Genetic Algorithm in the AdaBoost training to optimize the detection performance given the number of Haar features; (2) a complexity control scheme to meet hard real-time deadlines; (3) a hardware pipeline design for Haar-like feature calculation and a system design exploiting several levels of parallelism. The proposed architecture is verified by synthesis to Altera's low-cost Cyclone II FPGA. Simulation results show the system can achieve about a 75-80% detection rate for group portraits.
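Haar-like features of the kind AdaBoost selects are conventionally evaluated with an integral image (summed-area table), which reduces any rectangle sum to a four-lookup, O(1) operation; this is what makes a streaming hardware pipeline for feature calculation feasible. A minimal software sketch of the idea (pure Python, not the paper's FPGA design):

```python
def integral_image(img):
    """Summed-area table: ii[y][x] = sum of img over rows < y, cols < x."""
    h, w = len(img), len(img[0])
    ii = [[0] * (w + 1) for _ in range(h + 1)]
    for y in range(h):
        row_sum = 0
        for x in range(w):
            row_sum += img[y][x]
            ii[y + 1][x + 1] = ii[y][x + 1] + row_sum
    return ii

def rect_sum(ii, x, y, w, h):
    """Sum of the w x h rectangle with top-left corner (x, y): 4 lookups."""
    return ii[y + h][x + w] - ii[y][x + w] - ii[y + h][x] + ii[y][x]

def haar_two_rect(ii, x, y, w, h):
    """Two-rectangle Haar-like feature: left half minus right half."""
    half = w // 2
    return rect_sum(ii, x, y, half, h) - rect_sum(ii, x + half, y, half, h)

# Tiny illustrative image: bright left half, dark right half
img = [[1, 1, 0, 0],
       [1, 1, 0, 0],
       [1, 1, 0, 0],
       [1, 1, 0, 0]]
ii = integral_image(img)
```

The same four-corner lookup pattern maps naturally onto fixed-latency hardware, since every feature evaluation touches a constant number of memory locations regardless of rectangle size.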
106.
Spatiotemporal data from satellite remote sensing and surface meteorology networks have made it possible to continuously monitor global plant production, and to identify global trends associated with land cover/use and climate change. Gross primary production (GPP) and net primary production (NPP) are routinely derived from the Moderate Resolution Imaging Spectroradiometer (MODIS) onboard satellites Terra and Aqua, and estimates generally agree with independent measurements at validation sites across the globe. However, the accuracy of GPP and NPP estimates in some regions may be limited by the quality of model input variables and heterogeneity at fine spatial scales. We developed new methods for deriving model inputs (i.e., land cover, leaf area, and photosynthetically active radiation absorbed by plant canopies) from airborne laser altimetry (LiDAR) and Quickbird multispectral data at resolutions ranging from about 30 m to 1 km. In addition, LiDAR-derived biomass was used as a means for computing carbon-use efficiency. Spatial variables were used with temporal data from ground-based monitoring stations to compute a six-year GPP and NPP time series for a 3600 ha study site in the Great Lakes region of North America. Model results compared favorably with independent observations from a 400 m flux tower and a process-based ecosystem model (BIOME-BGC), but only after removing vapor pressure deficit as a constraint on photosynthesis from the MODIS global algorithm. Fine-resolution inputs captured more of the spatial variability, but estimates were similar to coarse-resolution data when integrated across the entire landscape. Failure to account for wetlands had little impact on landscape-scale estimates, because vegetation structure, composition, and conversion efficiencies were similar to upland plant communities. 
Plant productivity estimates were noticeably improved using LiDAR-derived variables, while uncertainties associated with land cover generalizations and wetlands in this largely forested landscape were considered less important.
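For context, the MODIS-style production efficiency model behind such estimates has the general form GPP = ε_max × f(Tmin) × f(VPD) × fPAR × PAR; removing vapor pressure deficit as a constraint, as done in the study, amounts to fixing f(VPD) = 1. A simplified sketch with illustrative ramp endpoints and ε_max (not the actual MOD17 biome coefficients):

```python
def linear_ramp(x, lo, hi):
    """Scalar in [0, 1] rising linearly from lo to hi."""
    if x <= lo:
        return 0.0
    if x >= hi:
        return 1.0
    return (x - lo) / (hi - lo)

def gpp(par, fpar, tmin, vpd, eps_max=0.0018, use_vpd=True):
    """Daily GPP from a light-use-efficiency model.
    par: incident PAR (MJ m^-2 day^-1); fpar: fraction of PAR absorbed;
    tmin: daily minimum temperature (C); vpd: vapor pressure deficit (Pa).
    Ramp endpoints and eps_max are illustrative placeholders."""
    t_scalar = linear_ramp(tmin, -8.0, 12.0)
    v_scalar = 1.0 - linear_ramp(vpd, 650.0, 3000.0) if use_vpd else 1.0
    return eps_max * t_scalar * v_scalar * fpar * par

# Same conditions with and without the VPD constraint
with_vpd = gpp(par=10.0, fpar=0.8, tmin=12.0, vpd=2000.0)
no_vpd = gpp(par=10.0, fpar=0.8, tmin=12.0, vpd=2000.0, use_vpd=False)
```

Under high-VPD conditions the constrained model suppresses GPP, which illustrates why dropping the VPD scalar can change agreement with flux-tower observations.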
107.
Epiphylls (lichens, fungi, liverworts, and other organisms that colonize leaf surfaces) are found throughout humid forests of the world. It is well understood that epiphylls inhibit light interception by host plants, but their effect on remote sensing of colonized forests has not been examined. Incorporating leaf-level spectra from Terra Firme (primary forest) and Amazonian Caatinga (woodlands/forest growing on nutrient-deficient sandy soils), we used the GeoSAIL model to propagate leaf-level measurements to the canopy level and determine their effect on commonly used vegetation indices. In Caatinga, moderate infestations (50% leaf area epiphyll cover) lowered simulated Normalized Difference Vegetation Index (NDVI) and Enhanced Vegetation Index (EVI) values by 6.1% and 20.4%, respectively, largely due to near-infrared dampening. Heavy infestation (100% cover) simulations exhibited decreases 1.5-2 times greater than those of moderate infestations. For Terra Firme forests, which are generally less affected by epiphylls, moderate (20% leaf area) and heavy infestations (40%) lowered EVI by 4.4% (S.D. 0.8%) and 8.1% (S.D. 1.5%), respectively. Near-infrared and green reflectance were most affected at the canopy level, showing mean decreases of 10.6% (S.D. 2.25%) and 9.5% (S.D. 3.49%), respectively, in heavy Terra Firme infestations. Time series of Moderate Resolution Imaging Spectroradiometer (MODIS) data corroborated the modeling results, suggesting a degree of coupling between epiphyll cover and the EVI and NDVI. These results suggest that, without explicit consideration of the presence of epiphylls, remote sensing-based methodologies may underestimate leaf area index, biomass, and productivity in humid forests.
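Both indices are simple band combinations, and the asymmetry reported above (EVI dropping more than NDVI) can be reproduced with a toy calculation in which only near-infrared reflectance is dampened; the reflectance values below are illustrative, not from the study:

```python
def ndvi(nir, red):
    """Normalized Difference Vegetation Index."""
    return (nir - red) / (nir + red)

def evi(nir, red, blue, G=2.5, C1=6.0, C2=7.5, L=1.0):
    """Enhanced Vegetation Index (standard MODIS coefficients)."""
    return G * (nir - red) / (nir + C1 * red - C2 * blue + L)

# Illustrative canopy reflectances: epiphylls dampen the NIR band only
clean = (ndvi(0.45, 0.05), evi(0.45, 0.05, 0.03))
damped = (ndvi(0.38, 0.05), evi(0.38, 0.05, 0.03))
```

Because NDVI normalizes by the NIR + red sum while EVI's denominator is dominated by constants and visible bands, the same NIR dampening produces a larger relative drop in EVI, consistent with the simulated 20.4% versus 6.1% decreases.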
108.
Psychophysical research on text legibility has historically investigated factors such as size, colour and contrast, but there has been relatively little direct empirical evaluation of typographic design itself, particularly in the emerging context of glance reading. In the present study, participants performed a lexical decision task controlled by an adaptive staircase method. Two typefaces, a ‘humanist’ and ‘square grotesque’ style, were tested. Study I examined positive and negative polarities, while Study II examined two text sizes. Stimulus duration thresholds were sensitive to differences between typefaces, polarities and sizes. Typeface also interacted significantly with age, particularly for conditions with higher legibility thresholds. These results are consistent with previous research assessing the impact of the same typefaces on interface demand in a simulated driving environment. This simplified methodology of assessing legibility differences can be adapted to investigate a wide array of questions relevant to typographic and interface designs.

Practitioner Summary: A method is described for rapidly investigating the relative legibility of different typographical features. Results indicate that during glance-like reading induced by the psychophysical technique and under the lighting conditions considered, humanist-style type is significantly more legible than a square grotesque style, and that black-on-white text is significantly more legible than white-on-black.
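An adaptive staircase adjusts the stimulus duration trial by trial: shorter after a correct response, longer after an error, so that the presented durations converge toward the participant's threshold. A minimal one-up/one-down sketch (the start value, step size, and response sequence are illustrative; the study's exact staircase rule is not specified here):

```python
def staircase(responses, start=200.0, step=20.0):
    """One-up/one-down staircase over stimulus duration (ms):
    decrease duration after a correct response, increase after an error.
    Returns the sequence of durations presented."""
    durations = [start]
    for correct in responses:
        nxt = durations[-1] - step if correct else durations[-1] + step
        durations.append(max(nxt, step))  # keep the duration positive
    return durations

# Illustrative run: correct responses until durations become too short
trace = staircase([True, True, True, False, True, False])
```

Once the trace starts oscillating between reversals (here around 140-160 ms), the midpoint of those reversal values serves as the duration-threshold estimate.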

109.
There is limited research on trade-offs in demand between manual and voice interfaces of embedded and portable technologies. Mehler et al. identified differences in driving performance, visual engagement and workload between two contrasting embedded vehicle system designs (Chevrolet MyLink and Volvo Sensus). The current study extends this work by comparing these embedded systems with a smartphone (Samsung Galaxy S4). None of the voice interfaces eliminated visual demand. Relative to placing calls manually, both embedded voice interfaces resulted in less eyes-off-road time than the smartphone. Errors were most frequent when calling contacts using the smartphone. The smartphone and MyLink allowed addresses to be entered using compound voice commands resulting in shorter eyes-off-road time compared with the menu-based Sensus but with many more errors. Driving performance and physiological measures indicated increased demand when performing secondary tasks relative to ‘just driving’, but were not significantly different between the smartphone and embedded systems.

Practitioner Summary: The findings show that embedded system and portable device voice interfaces place fewer visual demands on the driver than manual interfaces, but they also underscore how differences in system designs can significantly affect not only the demands placed on drivers, but also the successful completion of tasks.

110.
The objective of this paper is to derive an algorithm for preserving important subscale morphologic characteristics on grids of lower resolution, in particular for linear features such as canyons and ridge lines. The development of such an algorithm is necessitated by applications that require reduced spatial resolution, as is common in cartographic generalization, GIS applications, and geophysical modeling. Since any algorithm that results in weighted averages, including optimum interpolation and ordinary kriging, cannot reproduce correct depths, a new algorithm is designed based on principles of mathematical morphology. The algorithm described here is applied to derive a subglacial bed of the Greenland Ice Sheet that includes the trough of Jakobshavn Isbræ as a continuous canyon at correct depth in a low-resolution (5-km) digital elevation model (DEM). Data from recent airborne radar measurements of the elevation of the subglacial bed, collected as part of the CReSIS project, are utilized. The morphologic algorithm is designed with geophysical ice-sheet modeling in mind, in the following context. Currently occurring changes in the Earth's climate and the cryosphere cause changes in sea level, and the societal relevance of these natural processes motivates estimation of maximal sea-level rise in the medium-term future. The fast-moving outlet glaciers are more sensitive to climatic change than other parts of the Greenland Ice Sheet. Jakobshavn Isbræ, the fastest-moving ice stream in Greenland, follows a subglacial geologic trough. Since the existence of the trough causes the acceleration of the slow-moving inland ice in the Jakobshavn region and the formation of the ice stream, correct representation of the trough in a DEM is essential to model changes in the dynamics of the ice sheet and the resultant sea-level predictions, even if current ice-sheet models can typically be run only at 5-km resolution.
The DEM resulting from this study helps to bridge the conceptual gap between data analysis and geophysical modeling approaches. It is available as SeaRISE Greenland bed data set dev1.2 at http://websrv.cs.umt.edu/isis/index.php/SeaRISE_Assessment.
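The paper's key observation, that any weighted-average resampling necessarily shallows a narrow canyon while a morphological (minimum-preserving) operator keeps its depth, can be seen in a toy comparison; the elevation grid below is illustrative, not the CReSIS data:

```python
def downsample(grid, factor, reduce_fn):
    """Aggregate non-overlapping factor x factor blocks with reduce_fn."""
    h, w = len(grid), len(grid[0])
    return [[reduce_fn(grid[y + dy][x + dx]
                       for dy in range(factor) for dx in range(factor))
             for x in range(0, w, factor)]
            for y in range(0, h, factor)]

def mean(values):
    values = list(values)
    return sum(values) / len(values)

# Toy DEM: a one-cell-wide canyon of depth -1000 cutting through flat
# terrain at elevation 0
dem = [[0,     0,     0,     0],
       [-1000, -1000, -1000, -1000],
       [0,     0,     0,     0],
       [0,     0,     0,     0]]

mean_dem = downsample(dem, 2, mean)  # weighted-average resampling
min_dem = downsample(dem, 2, min)    # morphological erosion (min filter)
```

The averaged grid reports the canyon at half its true depth, whereas the minimum filter, the simplest instance of a morphological erosion, preserves the correct depth; the paper's algorithm applies this principle selectively so that linear features survive without depressing the whole surface.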