53.
We explore the impact of edge states in three types of transition metal dichalcogenides (TMDs), namely metallic Td-phase WTe2 and semiconducting 2H-phase MoTe2 and MoS2, by patterning thin flakes into ribbons with varying channel widths. No obvious charge depletion at the edges is observed for any of these three materials, in contrast to observations made for graphene nanoribbon devices. The semiconducting ribbons are characterized in a three-terminal field-effect transistor (FET) geometry. In addition, two ribbon array designs have been carefully investigated and found to exhibit current levels higher than those observed for conventional one-channel devices. Our results suggest that device structures incorporating a high number of edges can improve the performance of TMD FETs. This improvement is attributed to a higher local electric field, resulting from the edges, increasing the effective number of charge carriers, and the absence of any detrimental edge-related scattering.
54.
The speed profiles of arm movements display a number of regularities, including bell-shaped speed profiles in straight reaching movements and an inverse relationship between speed and curvature in extemporaneous drawing movements (described as a 2/3 power law). Here we propose a new model that simultaneously accounts for both regularities by replacing the 2/3 power law with a smoothness constraint. For a given path of the hand in space, our model assumes that the speed profile will be the one that minimizes the third derivative of position (or "jerk"). Analysis of the mathematical relationship between this smoothness constraint and the 2/3 power law revealed that in both two and three dimensions, the power law is equivalent to setting the jerk along the normal to the path to zero; it generates speed predictions that are similar to, but clearly distinguishable from, those of our model. We have assessed the accuracy of the model on a number of motor tasks in two and three dimensions, involving discrete movements along arbitrary paths, traced with different limb segments. The new model provides a very close fit to the observed speed profiles in all cases. Its performance is uniformly better than that of all existing versions of the 2/3 power law, suggesting that the correlation between speed and curvature may be a consequence of an underlying motor strategy to produce smooth movements. Our results indicate that the relationship between the path and the speed profile of a complex arm movement is stronger than previously thought, especially within a single trial. The accuracy of the model was quite uniform over movements of different shape, size, and average speed. We did not find evidence for segmentation, yet prediction error increased with movement duration, suggesting a continuous fluctuation of the "tempo" of discrete movements. The implications of these findings for motor planning and on-line control are discussed.
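For reference, the two quantities this abstract compares can be stated compactly. The formulas below are the standard textbook forms; the symbols (J for the smoothness cost, κ for path curvature, A for angular speed, k for the empirical gain) are our labels rather than notation taken from the paper.

```latex
% Minimum-jerk criterion: for a prescribed hand path x(t) of duration T, the model
% selects the speed profile that minimizes the integrated squared jerk.
\[
  J \;=\; \int_{0}^{T} \left\| \frac{d^{3}\mathbf{x}(t)}{dt^{3}} \right\|^{2} \, dt
\]
% Two-thirds power law: angular speed A scales with path curvature \kappa; the
% equivalent statement for tangential speed v follows from A = v\,\kappa.
\[
  A(t) \;=\; k\,\kappa(t)^{2/3}
  \qquad\Longleftrightarrow\qquad
  v(t) \;=\; k\,\kappa(t)^{-1/3}
\]
```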
55.
Jordan D. L., Lewis G. D., Jakeman E. Applied Optics, 1996, 35(19): 3583-3590
Ellipsometer measurements of the effective complex refractive index at a wavelength of 10.6 μm are made on a series of glass and aluminum surfaces of increasing surface roughness. The measured values are then used to calculate the degree of emission polarization and are shown to be in agreement with the experimentally determined values when depolarization is small. Comparisons are also made with calculations based on the Kirchhoff scattering theory. Both the theory and the experimental results indicate that it is the local surface slope and not the roughness magnitude that is the prime factor in determining the degree of emission polarization from the samples studied.
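For context, a common route from a measured effective complex refractive index to the degree of emission polarization runs through Kirchhoff's law and the Fresnel reflectances. The relations below are the standard textbook form under one sign convention, written by us for orientation; they are not a quotation from this paper.

```latex
% Kirchhoff's law for an opaque surface: polarized emissivity equals one minus the
% corresponding Fresnel reflectance, evaluated with the measured complex index
% \tilde{n} = n + ik at viewing angle \theta.
\[
  \varepsilon_{s}(\theta) = 1 - R_{s}(\theta),
  \qquad
  \varepsilon_{p}(\theta) = 1 - R_{p}(\theta)
\]
% Degree of emission polarization (one common convention).
\[
  P(\theta) \;=\; \frac{\varepsilon_{p}(\theta) - \varepsilon_{s}(\theta)}
                       {\varepsilon_{p}(\theta) + \varepsilon_{s}(\theta)}
\]
```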
56.
Instructors in higher education are disseminating instructional content via podcasting, as many rally behind the technology’s potential benefits. Others have expressed concern about the risks of deleterious effects that might accompany the adoption of podcasting, such as lower class attendance. Yet, relatively few studies have investigated students’ perceptions of podcasting for educational purposes, especially in relation to different podcasting forms: repetitive and supplemental. The present study explored students’ readiness and attitudes towards these two forms of podcasting to provide fundamental information for future researchers and educators. The results indicated that students may not be as ready or eager to use podcasting for repetitive or supplemental educational purposes as much as we think they are, but they could be persuaded.
57.
Privacy policies for shared content in social network sites
Social networking is one of the major technological phenomena of Web 2.0, with hundreds of millions of subscribed users. Social networks enable a form of self-expression for users and help them to socialize and share content with other users. Although content sharing is one of the most prominent features of existing social network sites, these sites do not provide any mechanism for collective management of privacy settings for shared content. In this paper, using game theory, we model the problem of collective enforcement of privacy policies on shared data. In particular, we propose a solution that offers automated ways to share images based on an extended notion of content ownership. Building upon the Clarke-Tax mechanism, we describe a simple mechanism that promotes truthfulness and rewards users who promote co-ownership. Our approach enables social network users to compose friendship-based policies based on distances from an agreed-upon central user, selected using several social network metrics. We integrate our design with inference techniques that free users from the burden of manually selecting privacy preferences for each picture. To the best of our knowledge, this is the first time such a privacy protection mechanism for social networking has been proposed. We also extend our mechanism to support collective enforcement across multiple social network sites. In the paper, we also show a proof-of-concept application, which we implemented in the context of Facebook, one of today’s most popular social networks. Through our implementation, we show the feasibility of such an approach and show that it can be implemented with a minimal increase in overhead to end-users. We complete our analysis by conducting a user study to investigate users’ understanding of co-ownership and the usefulness and clarity of our approach. Users responded favorably to the approach, indicating a general understanding of co-ownership and the auction, and found the approach to be both useful and fair.
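The Clarke-Tax (pivotal) mechanism the authors build on can be illustrated with a small, self-contained C++ sketch. The policies, values, and function names below are hypothetical and this is not the paper's implementation: each co-owner reports a value for every candidate policy, the policy with the largest total reported value wins, and each user is charged the loss their participation imposes on the others, which is what makes truthful reporting the dominant strategy.

```cpp
// Hypothetical Clarke-Tax vote over privacy policies for a co-owned photo.
#include <iostream>
#include <string>
#include <vector>

// Policy maximizing the summed reported value, optionally ignoring one user.
int bestPolicy(const std::vector<std::vector<double>>& values, int excludeUser = -1) {
    const int nPolicies = static_cast<int>(values.front().size());
    int best = 0;
    double bestSum = -1e300;
    for (int p = 0; p < nPolicies; ++p) {
        double sum = 0.0;
        for (int u = 0; u < static_cast<int>(values.size()); ++u)
            if (u != excludeUser) sum += values[u][p];
        if (sum > bestSum) { bestSum = sum; best = p; }
    }
    return best;
}

int main() {
    // Rows: co-owners; columns: candidate policies (illustrative values only).
    const std::vector<std::string> policies = {"friends-only", "friends-of-friends", "public"};
    std::vector<std::vector<double>> values = {
        {5.0, 2.0, 0.0},   // uploader prefers a restrictive policy
        {1.0, 4.0, 2.0},   // tagged friend prefers friends-of-friends
        {0.0, 1.0, 3.0}};  // another co-owner prefers public

    const int chosen = bestPolicy(values);
    std::cout << "Chosen policy: " << policies[chosen] << "\n";

    // Clarke tax: (best the others could do without user u) - (what they get with u present).
    for (int u = 0; u < static_cast<int>(values.size()); ++u) {
        const int without = bestPolicy(values, u);
        double othersWithoutU = 0.0, othersWithU = 0.0;
        for (int v = 0; v < static_cast<int>(values.size()); ++v) {
            if (v == u) continue;
            othersWithoutU += values[v][without];
            othersWithU += values[v][chosen];
        }
        std::cout << "Tax for user " << u << ": " << (othersWithoutU - othersWithU) << "\n";
    }
    return 0;
}
```

Only pivotal users (those who change the outcome) end up paying a non-zero tax, which is the property the abstract relies on to reward truthful reports.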
59.
Modern graphics cards, commonly used in desktop computers, have evolved beyond a simple interface between processor and display to incorporate sophisticated calculation engines that can be applied to general purpose computing. The Monte Carlo algorithm for modelling photon transport in turbid media has been implemented on an NVIDIA® 8800gt graphics card using the CUDA toolkit. The Monte Carlo method relies on following the trajectory of millions of photons through the sample, often taking hours or days to complete. The graphics-processor implementation, processing roughly 110 million scattering events per second, was found to run more than 70 times faster than a similar, single-threaded implementation on a 2.67 GHz desktop computer.

Program summary

Program title: Phoogle-C/Phoogle-G
Catalogue identifier: AEEB_v1_0
Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEEB_v1_0.html
Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html
No. of lines in distributed program, including test data, etc.: 51 264
No. of bytes in distributed program, including test data, etc.: 2 238 805
Distribution format: tar.gz
Programming language: C++
Computer: Designed for Intel PCs. Phoogle-G requires an NVIDIA graphics card with support for CUDA 1.1
Operating system: Windows XP
Has the code been vectorised or parallelized?: Phoogle-G is written for SIMD architectures
RAM: 1 GB
Classification: 21.1
External routines: Charles Karney random number library; Microsoft Foundation Class library; NVIDIA CUDA library [1]
Nature of problem: The Monte Carlo technique is an effective algorithm for exploring the propagation of light in turbid media. However, accurate results require tracing the paths of many photons within the medium. The independence of photons naturally lends the Monte Carlo technique to implementation on parallel architectures. Generally, parallel computing can be expensive, but recent advances in consumer-grade graphics cards have opened the possibility of high-performance desktop parallel computing.
Solution method: In this pair of programmes we have implemented the Monte Carlo algorithm described by Prahl et al. [2] for photon transport in infinite scattering media to compare the performance of two readily accessible architectures: a standard desktop PC and a consumer-grade graphics card from NVIDIA.
Restrictions: The graphics-card implementation uses single-precision floating-point numbers for all calculations. Only photon transport from an isotropic point source is supported. The graphics-card version has no user interface; the simulation parameters must be set in the source code. The desktop version has a simple user interface, but some properties can only be accessed through an ActiveX client (such as Matlab).
Additional comments: The random number library used has an LGPL licence (http://www.gnu.org/copyleft/lesser.html).
Running time: Runtime can range from minutes to months depending on the number of photons simulated and the optical properties of the medium.
References:
[1] http://www.nvidia.com/object/cuda_home.html
[2] S. Prahl, M. Keijzer, S.L. Jacques, A. Welch, SPIE Institute Series 5 (1989) 102.
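To make the "Nature of problem" and "Solution method" entries above concrete, here is a single-threaded, deliberately simplified C++ sketch of the photon-transport loop that such a GPU implementation parallelizes. The optical coefficients are example values, scattering is sampled isotropically instead of through a full phase function, and this is an illustrative reconstruction, not the distributed Phoogle-C/Phoogle-G code.

```cpp
// Photons leave an isotropic point source in an infinite turbid medium; free paths
// are exponentially distributed, part of the photon weight is absorbed at every
// interaction, and low-weight photons are terminated by Russian roulette.
#include <cmath>
#include <cstdio>
#include <random>

int main() {
    const double mu_a = 0.1;                  // absorption coefficient (1/mm), example value
    const double mu_s = 10.0;                 // scattering coefficient (1/mm), example value
    const double mu_t = mu_a + mu_s;          // total interaction coefficient
    const double albedo = mu_s / mu_t;        // fraction of weight surviving each interaction
    const double PI = 3.14159265358979323846;
    const long nPhotons = 100000;

    std::mt19937_64 rng(12345);
    std::uniform_real_distribution<double> uni(0.0, 1.0);

    long long interactions = 0;
    double sumRadius = 0.0;                   // radius at termination, for a simple summary

    for (long n = 0; n < nPhotons; ++n) {
        double x = 0.0, y = 0.0, z = 0.0, w = 1.0;   // start at the point source, full weight
        while (w > 0.0) {
            // Isotropic direction: used for the initial emission and, in this
            // simplified sketch, for every subsequent scattering event as well.
            const double cosT = 2.0 * uni(rng) - 1.0;
            const double sinT = std::sqrt(1.0 - cosT * cosT);
            const double phi  = 2.0 * PI * uni(rng);
            const double ux = sinT * std::cos(phi), uy = sinT * std::sin(phi), uz = cosT;

            // Free path to the next interaction, drawn from an exponential distribution.
            const double step = -std::log(1.0 - uni(rng)) / mu_t;
            x += step * ux; y += step * uy; z += step * uz;

            w *= albedo;                               // absorb part of the photon weight
            ++interactions;

            // Russian roulette: terminate low-weight photons without biasing the result.
            if (w < 1e-4) {
                if (uni(rng) < 0.1) w *= 10.0;
                else { sumRadius += std::sqrt(x * x + y * y + z * z); w = 0.0; }
            }
        }
    }
    std::printf("%ld photons traced, %lld interactions, mean termination radius %.2f mm\n",
                nPhotons, interactions, sumRadius / nPhotons);
    return 0;
}
```

Because each photon's history is independent, the outer loop maps directly onto one GPU thread per photon, which is the parallelism the paper exploits.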
60.
Development of a pit filling algorithm for LiDAR canopy height models
LiDAR canopy height models (CHMs) can exhibit unnatural-looking holes or pits, i.e., pixels with a much lower digital number than their immediate neighbors. These artifacts may be caused by a combination of factors, from data acquisition to post-processing, that not only result in a noisy appearance to the CHM but may also limit semi-automated tree-crown delineation and lead to errors in biomass estimates. We present a highly effective semi-automated pit filling algorithm that interactively detects data pits based on a simple user-defined threshold, and then fills them with a value derived from their neighborhood. We briefly describe this algorithm and its graphical user interface, and show its result in a LiDAR CHM populated with data pits. This method can be rapidly applied to any CHM with minimal user interaction. Visualization confirms that our method effectively and quickly removes data pits.
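A minimal sketch of the kind of threshold-based detect-and-fill pass described above; the function and parameter names are ours, not the authors', and the rule used here (replace a pixel that lies more than a user-defined threshold below the median of its eight neighbours) is one plausible neighbourhood fill, not necessarily the exact one implemented in the paper.

```cpp
// Threshold-based pit filling on a row-major canopy height model grid.
#include <algorithm>
#include <cstdio>
#include <vector>

std::vector<float> fillPits(const std::vector<float>& chm, int rows, int cols, float threshold) {
    std::vector<float> out = chm;
    for (int r = 1; r < rows - 1; ++r) {
        for (int c = 1; c < cols - 1; ++c) {
            // Collect the 8-connected neighbourhood of the current pixel.
            std::vector<float> nbr;
            nbr.reserve(8);
            for (int dr = -1; dr <= 1; ++dr)
                for (int dc = -1; dc <= 1; ++dc)
                    if (dr != 0 || dc != 0)
                        nbr.push_back(chm[(r + dr) * cols + (c + dc)]);
            std::nth_element(nbr.begin(), nbr.begin() + 4, nbr.end());
            const float median = nbr[4];                 // upper median of the 8 neighbours
            const float value = chm[r * cols + c];
            if (median - value > threshold)              // pit: well below its surroundings
                out[r * cols + c] = median;              // fill with the neighbourhood value
        }
    }
    return out;
}

int main() {
    // 5x5 toy CHM (heights in metres) with one obvious pit in the centre.
    std::vector<float> chm = {
        20, 21, 22, 21, 20,
        21, 22, 23, 22, 21,
        22, 23,  2, 23, 22,
        21, 22, 23, 22, 21,
        20, 21, 22, 21, 20};
    std::vector<float> filled = fillPits(chm, 5, 5, 5.0f);
    std::printf("centre pixel before: %.1f, after: %.1f\n", chm[12], filled[12]);
    return 0;
}
```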