A total of 2,334 results were found (search time: 15 ms).
71.
The incident indirect light over a range of image pixels is often coherent. Two common approaches to exploit this inter‐pixel coherence to improve rendering performance are Irradiance Caching and Radiance Caching. Both compute incident indirect light only for a small subset of pixels (the cache), and later interpolate between pixels. Irradiance Caching uses scalar values that can be interpolated efficiently, but cannot account for shading variations caused by normal and reflectance variation between cache items. Radiance Caching maintains directional information, e.g., to allow highlights between cache items, but at the cost of storing and evaluating a Spherical Harmonics (SH) function per pixel. The arithmetic and bandwidth cost for this evaluation is linear in the number of coefficients and can be substantial. In this paper, we propose a method to replace it by an efficient per‐cache item pre‐filtering based on MIP maps — as previously done for environment maps — leading to a single constant‐time lookup per pixel. Additionally, per‐cache item geometry statistics stored in distance‐MIP maps are used to improve the quality of each pixel's lookup. Our approximate interactive global illumination approach is an order of magnitude faster than Radiance Caching with Phong BRDFs and can be combined with Monte Carlo‐raytracing, Point‐based Global Illumination or Instant Radiosity.
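The constant-time lookup that replaces the per-pixel SH evaluation can be sketched in miniature. This is a toy illustration, not the authors' renderer: the radiance map, the box filter, and the footprint-to-LOD heuristic are all assumptions made for the example.

```python
import numpy as np

def build_mip_chain(radiance):
    """Pre-filter a per-cache-item radiance map into a MIP pyramid:
    each level is the 2x2 box-filtered average of the previous one."""
    levels = [radiance]
    while levels[-1].shape[0] > 1:
        prev = levels[-1]
        h, w = prev.shape[0] // 2, prev.shape[1] // 2
        levels.append(prev.reshape(h, 2, w, 2).mean(axis=(1, 3)))
    return levels

def lookup(levels, u, v, footprint):
    """Single constant-time lookup: the filter footprint (in texels of
    the base level) selects the MIP level; one texel read replaces a
    sum over SH coefficients."""
    lod = min(int(np.log2(max(footprint, 1.0))), len(levels) - 1)
    lvl = levels[lod]
    x = min(int(u * lvl.shape[1]), lvl.shape[1] - 1)
    y = min(int(v * lvl.shape[0]), lvl.shape[0] - 1)
    return lvl[y, x]

# A glossy pixel (small footprint) reads a sharp level; a rough pixel
# (large footprint) reads a heavily pre-filtered one.
rad = np.arange(16.0).reshape(4, 4)
levels = build_mip_chain(rad)
sharp = lookup(levels, 0.1, 0.1, footprint=1.0)
rough = lookup(levels, 0.1, 0.1, footprint=4.0)
```

The pre-filtering cost is paid once per cache item; every pixel afterwards pays only one lookup, which is what makes the approach an order of magnitude faster than evaluating an SH expansion per pixel.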
72.
This article presents a case study on retrospective verification of the Linux Virtual File System (VFS), which is aimed at checking violations of API usage rules and memory properties. Since VFS maintains dynamic data structures and is written in a mixture of C and inlined assembly, modern software model checkers cannot be applied. Our case study centres around our novel automated software verification tool, the SOCA Verifier, which symbolically executes and analyses compiled code. We describe how this verifier deals with complex features such as memory access, pointer aliasing and computed jumps in the VFS implementation, while reducing manual modelling to a minimum. Our results show that the SOCA Verifier is capable of analysing the complex Linux VFS implementation reliably and efficiently, thereby going beyond traditional testing tools and into niches that current software model checkers do not reach. This testifies to the SOCA Verifier’s suitability as an effective and efficient bug-finding tool during the development of operating system components.
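The core idea of symbolic execution, which the SOCA Verifier applies to compiled code, can be shown on a deliberately tiny scale. The three-instruction toy program, the string-based expressions, and the fork-on-branch strategy below are illustrative assumptions; the real tool works on machine code and uses constraint solving rather than string constraints.

```python
def symbolic_execute(program):
    """Minimal symbolic-execution sketch: values are symbolic expression
    strings, and every conditional branch forks the state, with each
    fork recording the path constraint it assumed."""
    def run(pc, env, path):
        while pc < len(program):
            op, *args = program[pc]
            if op == "assign":            # x := expr over symbolic values
                dst, fn = args
                env = {**env, dst: fn(env)}
                pc += 1
            elif op == "branch":          # fork on a symbolic condition
                cond, target = args
                c = cond(env)
                return (run(target, env, path + [c]) +
                        run(pc + 1, env, path + [f"not({c})"]))
        return [(env, path)]              # one final state per path
    return run(0, {"x": "X"}, [])

# Toy program: x := X + 1; if x > 0 goto end; x := 0
prog = [
    ("assign", "x", lambda e: f"({e['x']} + 1)"),
    ("branch", lambda e: f"{e['x']} > 0", 3),
    ("assign", "x", lambda e: "0"),
]
states = symbolic_execute(prog)
```

Each returned state pairs a symbolic environment with the conjunction of constraints under which it is reachable; a bug-finding tool would hand those constraints to a solver to decide path feasibility.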
73.
Music is a fundamental part of most cultures. Controlling music playback has commonly been used to demonstrate new interaction techniques and algorithms. In particular, controlling music playback has been used to demonstrate and evaluate gesture recognition algorithms. Previous work, however, used gestures that were defined based on intuition, the developers’ preferences, and the respective algorithm’s capabilities. In this paper, we propose a refined process for deriving gestures from continual user feedback. Using this process, every result and design decision is validated in the subsequent step of the process, so comprehensive feedback can be collected from each of the conducted user studies. Throughout the process, we develop a set of free-hand gestures for controlling music playback. The situational context is analysed to shape the usage scenario and derive an initial set of necessary functions. In a subsequent user study, the set of functions is validated and gesture proposals are collected from participants for each function. Two gesture sets containing static and dynamic gestures are derived and analysed in a comparative evaluation. The comparative evaluation shows the suitability of the identified gestures and allows further refinement. Our results indicate that the proposed process, which includes validation of each design decision, improves the final results. By using the process to identify gestures for controlling music playback, we not only show that the refined process can successfully be applied, but we also provide a consistent gesture set that can serve as a realistic benchmark for gesture recognition algorithms.
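When gesture proposals are collected from participants for each function, a standard way to quantify consensus (in the elicitation literature generally, not stated in this paper) is Wobbrock et al.'s agreement rate: the sum over groups of identical proposals of the squared fraction of participants in each group.

```python
from collections import Counter

def agreement_rate(proposals):
    """Agreement rate A for one referent in a gesture-elicitation study:
    A = sum over identical-proposal groups g of (|g| / |proposals|)^2.
    A = 1.0 means every participant proposed the same gesture."""
    n = len(proposals)
    return sum((c / n) ** 2 for c in Counter(proposals).values())

# Hypothetical proposals from 10 participants for a "next track" function:
a = agreement_rate(["swipe-left"] * 6 + ["swipe-right"] * 3 + ["point"])
```

A low agreement rate for a function signals that the elicited gesture set needs the kind of comparative follow-up evaluation the paper describes.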
74.
We provide a Mathematica code for decomposing strongly correlated quantum states described by a first-quantized, analytical wave function into many-body Fock states. Within them, the single-particle occupations refer to the subset of Fock–Darwin functions with no nodes. Such states, commonly appearing in two-dimensional systems subjected to gauge fields, were first discussed in the context of quantum Hall physics and are nowadays very relevant in the field of ultracold quantum gases. As important examples, we explicitly apply our decomposition scheme to the prominent Laughlin and Pfaffian states. This allows for easily calculating the overlap between arbitrary states with these highly correlated test states, and thus provides a useful tool to classify correlated quantum systems. Furthermore, we can directly read off the angular momentum distribution of a state from its decomposition. Finally we make use of our code to calculate the normalization factors for Laughlin's famous quasi-particle/quasi-hole excitations, from which we gain insight into the intriguing fractional behavior of these excitations.
Program summary
Program title: Strongdeco
Catalogue identifier: AELA_v1_0
Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AELA_v1_0.html
Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html
No. of lines in distributed program, including test data, etc.: 5475
No. of bytes in distributed program, including test data, etc.: 31 071
Distribution format: tar.gz
Programming language: Mathematica
Computer: Any computer on which Mathematica can be installed
Operating system: Linux, Windows, Mac
Classification: 2.9
Nature of problem: Analysis of strongly correlated quantum states.
Solution method: The program makes use of the tools developed in Mathematica to deal with multivariate polynomials to decompose analytical strongly correlated states of bosons and fermions into a standard many-body basis. Operations with polynomials, determinants and permanents are the basic tools.
Running time: The distributed notebook takes a couple of minutes to run.
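The published program is Mathematica; the smallest instance of the decomposition it performs can nonetheless be worked out by hand. For two fermions, the Laughlin polynomial is (z1 − z2)^m with m odd, and expanding it into monomials z1^k z2^(m−k) directly yields the amplitudes on two-particle Fock states |k, m−k⟩ labelled by occupied angular-momentum orbitals. The sketch below (an illustration, not part of the distributed code) uses only the binomial theorem.

```python
from math import comb

def decompose_two_particle_laughlin(m):
    """Amplitudes of (z1 - z2)^m, m odd, on two-fermion Fock states
    |k, m-k> with k > m-k.  The monomial pair
    z1^k z2^(m-k) - z1^(m-k) z2^k is the (unnormalized) Slater
    determinant of orbitals k and m-k, so one coefficient per pair
    suffices; every state carries total angular momentum m."""
    assert m % 2 == 1, "fermionic Laughlin states need odd m"
    return {(k, m - k): comb(m, k) * (-1) ** (m - k)
            for k in range(m, (m - 1) // 2, -1)}

# m = 3: (z1 - z2)^3 = (z1^3 z2^0 - z1^0 z2^3) - 3 (z1^2 z2 - z1 z2^2)
amps = decompose_two_particle_laughlin(3)
```

Reading off the keys gives the angular momentum distribution directly, mirroring what the abstract says about the full Mathematica decomposition.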
75.
This study evaluates the implementation of physical coordination training (PCT) and cognitive behavioural training (CBTr) interventions in a randomised controlled trial at nine cleaners' workplaces. Female cleaners (n = 294) were randomised into a PCT, a CBTr or a reference (REF) group. Both 12-week interventions were performed in groups guided by an instructor. Records were kept on intervention dose (adherence), unanticipated events at the workplace (context), and quality of intervention delivery (fidelity). Participant adherence was 37% in the PCT and 49% in the CBTr interventions. Optimal implementation was reached by only 6% in PCT and 42% in the CBTr. Analysis of the barriers to successful implementation indicated that the intervention process is sensitive to unanticipated events. In order to succeed in improving the health of high-risk populations such as cleaners and to correctly interpret intervention effects, more research on implementation is needed. Trial registration: ISRCTN96241850. PRACTITIONER SUMMARY: Both physical coordination training and cognitive behavioural training are potentially effective workplace interventions among low-educated job groups with high physical work demands. However, thorough consideration should be given to feasibility in the design of interventions. The optimal intervention should be tailored to closely match the implementation context and be robust and flexible to minimise susceptibility to changes in work organisation.
76.
77.
Recently there has been tremendous progress made in the research of novel nanotechnology for future nanoelectronic applications. In particular, several emerging nanoelectronic devices such as carbon-nanotube field-effect transistors (FETs), Si nanowire FETs, and planar III-V compound semiconductor (e.g., InSb, InAs) FETs, all hold promise as potential device candidates to be integrated onto the silicon platform for enhancing circuit functionality and also for extending Moore's Law. For high-performance and low-power logic transistor applications, it is important that these research devices are frequently benchmarked against the existing Si logic transistor data in order to gauge the progress of research. In this paper, we use four key device metrics to compare these emerging nanoelectronic devices to the state-of-the-art planar and nonplanar Si logic transistors. These four metrics include: 1) CV/I or intrinsic gate delay versus physical gate length L_g; 2) energy-delay product versus L_g; 3) subthreshold slope versus L_g; and 4) CV/I versus on-to-off-state current ratio I_ON/I_OFF. The results of this benchmarking exercise indicate that while these novel nanoelectronic devices show promise and opportunities for future logic applications, there still remain shortcomings in the device characteristics and electrostatics that need to be overcome. We believe that benchmarking is a key element in accelerating the progress of nanotechnology research for logic transistor applications.
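The first two benchmarking metrics are simple combinations of gate capacitance C, supply voltage V, and on-current I. A minimal sketch (the numeric values are made up for illustration, not measured device data):

```python
def gate_metrics(c_gate, v_dd, i_on):
    """Two of the four benchmarking metrics:
    intrinsic gate delay  tau = C*V/I          (seconds)
    energy-delay product  EDP = (C*V^2) * tau  (joule-seconds).
    Smaller is better for both."""
    tau = c_gate * v_dd / i_on
    edp = c_gate * v_dd ** 2 * tau
    return tau, edp

# Hypothetical device: 1 fF gate capacitance, 1 V supply, 1 mA on-current.
tau, edp = gate_metrics(c_gate=1e-15, v_dd=1.0, i_on=1e-3)
```

Plotting tau and EDP against gate length L_g for each device family, as the paper does, makes scaling trends across technologies directly comparable.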
78.
We operationalize scientific output in a region by means of the number of articles (as in the SciSearch database) per year, and technological output by means of the number of patent applications (as in the database of the European Patent Office) per priority year. All informetric analyses were done using the DIALOG online system. The main research questions are the following: Which scientific and technological fields or topics are most influential within a region, and which institutions or companies are mainly publishing articles or holding patents? Do the distributions of regional science and technology fields and of publishing institutions follow the well-known informetric function? Are there, as expected, only a few fields and a few institutions that dominate the region? Is there a connection between the economic power of a region and the regional publication and patent output? Examples studied in detail are seven German regions: Aachen, Düsseldorf, Hamburg, Köln (Cologne), Leipzig - Halle - Dessau, München (Munich), and Stuttgart. Three different indicators were used: science and technology attraction of a region (number of scientific articles and patents), science and technology intensity (articles and patents per 1,000 inhabitants), and science and technology density (articles and patents per 1 billion EURO gross value added). The top region for both attraction and intensity is Munich; for density it is Aachen.
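The three indicators are straightforward ratios over a region's publication and patent counts. A sketch with hypothetical figures (the numbers below are for illustration only, not the study's data):

```python
def region_indicators(articles, patents, inhabitants, gva_billion_eur):
    """The study's three indicators for a region:
    attraction - absolute output (articles + patents);
    intensity  - output per 1,000 inhabitants;
    density    - output per 1 billion EUR gross value added."""
    output = articles + patents
    return {
        "attraction": output,
        "intensity": output / (inhabitants / 1000),
        "density": output / gva_billion_eur,
    }

# Hypothetical region: 12,000 articles, 8,000 patents,
# 1.3 million inhabitants, 80 billion EUR gross value added.
region = region_indicators(articles=12000, patents=8000,
                           inhabitants=1_300_000, gva_billion_eur=80)
```

Normalizing by population (intensity) and by economic output (density) is what lets a smaller region like Aachen outrank a larger one like Munich on one indicator while trailing on the others.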
79.
80.
Investigating the dynamical and physical properties of cosmic dust can reveal a great deal of information about both the dust and its many sources. Over recent years, several spacecraft (e.g., Cassini, Stardust, Galileo, and Ulysses) have successfully characterised interstellar, interplanetary, and circumplanetary dust using a variety of techniques, including in situ analyses and sample return. Charge, mass, and velocity measurements of the dust are performed either directly (induced charge signals) or indirectly (mass and velocity from impact ionisation signals or crater morphology) and constrain the dynamical parameters of the dust grains. Dust compositional information may be obtained via either time-of-flight mass spectrometry of the impact plasma or direct sample return. The accurate and reliable interpretation of collected spacecraft data requires a comprehensive programme of terrestrial instrument calibration. This process involves accelerating suitable solar system analogue dust particles to hypervelocity speeds in the laboratory, an activity performed at the Max Planck Institut für Kernphysik in Heidelberg, Germany. Here, a 2 MV Van de Graaff accelerator electrostatically accelerates charged micron and submicron-sized dust particles to speeds up to 80 km s⁻¹. Recent advances in dust production and processing have allowed solar system analogue dust particles (silicates and other minerals) to be coated with a thin conductive shell, enabling them to be charged and accelerated. Refinements and upgrades to the beam line instrumentation and electronics now allow for the reliable selection of particles at velocities of 1-80 km s⁻¹ and with diameters between 0.05 μm and 5 μm. This ability to select particles for subsequent impact studies based on their charges, masses, or velocities is provided by a particle selection unit (PSU). The PSU contains a field programmable gate array, capable of monitoring in real time the particles' speeds and charges, and is controlled remotely by a custom, platform-independent software package. The new control instrumentation and electronics, together with the wide range of accelerable particle types, allow the controlled investigation of hypervelocity impact phenomena across a hitherto unobtainable range of impact parameters.
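The final speed of an electrostatically accelerated grain follows from energy conservation, qU = ½mv². A sketch with a hypothetical grain (the radius, density, and charge below are illustrative assumptions, not measured accelerator parameters):

```python
from math import pi, sqrt

def final_speed(charge_c, mass_kg, potential_v=2e6):
    """Speed gained by a charged dust grain accelerated through the
    full 2 MV Van de Graaff potential: q*U = (1/2)*m*v^2."""
    return sqrt(2 * charge_c * potential_v / mass_kg)

# Hypothetical submicron silicate-like grain: 0.1 um radius,
# 3000 kg/m^3 density, carrying 1 fC of surface charge.
radius = 1e-7                              # m
mass = 3000 * (4 / 3) * pi * radius ** 3   # kg
v = final_speed(charge_c=1e-15, mass_kg=mass)   # ~1.8e4 m/s
```

Because v scales as sqrt(q/m), the charge-to-mass ratio picked up by a grain determines where it lands in the 1-80 km s⁻¹ range, which is exactly the quantity the PSU monitors when selecting particles.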