Search results: 3,172 articles found (search time: 31 ms). Showing results 91–100.
91.
Exception handling in workflow management systems   Total citations: 1 (self-citations: 0, citations by others: 1)
Fault tolerance is a key requirement in process support systems (PSS), a class of distributed computing middleware encompassing applications such as workflow management systems and process-centered software engineering environments. A PSS controls the flow of work between programs and users in networked environments based on a "metaprogram" (the process). The resulting applications are characterized by a high degree of distribution and a high degree of heterogeneity, properties that make fault tolerance both highly desirable and difficult to achieve. We present a solution for implementing more reliable processes by using exception handling, as it is used in programming languages, and atomicity, as it is known from the transaction concept in database management systems. We describe the mechanism incorporating both transactions and exceptions and present a validation technique that allows assessing the correctness of process specifications.
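The combination the abstract describes, i.e. language-style exception handling plus transactional atomicity, can be sketched as follows. This is a minimal illustrative sketch, not the paper's actual mechanism: `AtomicRegion`, the `(do, undo)` step pairs, and the retry policy are all hypothetical names introduced here.

```python
# Sketch: a workflow region executes steps atomically. If a step raises,
# recorded compensations run in reverse order to undo partial work
# (atomicity), and the exception either triggers a retry or propagates
# to the enclosing process (exception handling).

class AtomicRegion:
    def __init__(self):
        self.compensations = []  # undo actions, run in reverse on failure

    def run(self, steps, max_retries=1):
        """steps: list of (do, undo) callables; returns all results or
        raises after rolling back and exhausting retries."""
        for attempt in range(max_retries + 1):
            self.compensations.clear()
            try:
                results = []
                for do, undo in steps:
                    results.append(do())
                    self.compensations.append(undo)
                return results          # every step committed
            except Exception:
                for undo in reversed(self.compensations):
                    undo()              # roll back partial work
                if attempt == max_retries:
                    raise               # escalate to the enclosing process
```

A failed region thus leaves the process state as it was before the region started, which is what makes the resulting processes easier to validate.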
92.
Since wavelets were introduced in the radiosity algorithm 5, surprisingly little research has been devoted to higher-order wavelets and their use in radiosity algorithms. A previous study 13 showed that wavelet radiosity, and especially higher-order wavelet radiosity, did not bring significant improvements over hierarchical radiosity and incurred a very large extra memory cost, prohibiting any effective computation. In this paper, we present a new implementation of wavelets in the radiosity algorithm that differs substantially from previous implementations in several key areas (refinement oracle, link storage, resolution algorithm). We show that, with this implementation, higher-order wavelets do bring an improvement over standard hierarchical radiosity and lower-order wavelets.
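The hierarchical representation underlying wavelet radiosity can be illustrated with the simplest (lowest-order, Haar) basis: the radiosity function over a patch is stored as a coarse average plus per-level detail coefficients, and refinement happens only where details are significant. A minimal 1D sketch, not the paper's higher-order implementation:

```python
import numpy as np

def haar_decompose(values):
    """Split a signal into a coarse mean plus per-level Haar detail
    coefficients (finest level first). Wavelet radiosity stores such
    coefficients per patch and refines only where details are large."""
    values = np.asarray(values, dtype=float)
    details = []
    while len(values) > 1:
        avg = (values[0::2] + values[1::2]) / 2.0
        det = (values[0::2] - values[1::2]) / 2.0
        details.append(det)
        values = avg
    return values[0], details

def haar_reconstruct(mean, details):
    """Invert haar_decompose exactly."""
    values = np.array([mean])
    for det in reversed(details):
        up = np.empty(2 * len(det))
        up[0::2] = values + det
        up[1::2] = values - det
        values = up
    return values
```

Higher-order wavelet bases replace the piecewise-constant averages with higher-degree polynomials, which is where the extra per-link storage discussed in the paper comes from.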
93.
94.
aITALC, a new tool for automating loop calculations in high energy physics, is described. The package creates Fortran code for two-fermion scattering processes automatically, starting from the generation and analysis of the Feynman graphs. We describe the modules of the tool, the intercommunication between them and illustrate its use with three examples.

Program summary

Title of the program: aITALC version 1.2.1 (9 August 2005)
Catalogue identifier: ADWO
Program summary URL: http://cpc.cs.qub.ac.uk/summaries/ADWO
Program obtainable from: CPC Program Library, Queen's University of Belfast, N. Ireland
Computer: PC i386
Operating system: GNU/Linux (tested on distributions SuSE 8.2 to 9.3, Red Hat 7.2, Debian 3.0, Ubuntu 5.04); also Solaris
Programming languages used: GNU Make, Diana, Form, Fortran77
Additional programs/libraries used: Diana 2.35 (Qgraf 2.0), Form 3.1, LoopTools 2.1 (FF)
Memory required to execute with typical data: up to about 10 MB
No. of processors used: 1
No. of lines in distributed program, including test data, etc.: 40 926
No. of bytes in distributed program, including test data, etc.: 371 424
Distribution format: tar gzip file
High-speed storage required: from 1.5 to 30 MB, depending on modules present and unfolding of examples
Nature of the physical problem: calculation of differential cross sections for e+e− annihilation in one-loop approximation.
Method of solution: generation and perturbative analysis of Feynman diagrams, with later evaluation of matrix elements and form factors.
Restrictions on the complexity of the problem: currently limited to 2→2 particle reactions in the electroweak Standard Model.
Typical running time: a few minutes, depending strongly on the complexity of the process and the Fortran compiler.
95.
This paper presents a framework for weighted fusion of several active shape and active appearance models. The approach is based on the eigenspace fusion method proposed by Hall et al., which has been extended to fuse more than two weighted eigenspaces using unbiased mean and covariance matrix estimates. To evaluate the performance of fusion, a comparative assessment of segmentation precision, as well as facial verification tests, is performed using the AR, EQUINOX, and XM2VTS databases. Based on the results, it is concluded that fusion is useful when the model needs to be updated online or when the original observations are absent.
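The core idea of weighted eigenspace fusion can be sketched directly. The sketch below is a simplification under stated assumptions: instead of Hall et al.'s incremental merge, it pools the weighted means and covariances (including the between-model mean scatter) and re-diagonalizes; `fuse_models` and its signature are illustrative names, not the paper's API.

```python
import numpy as np

def fuse_models(models):
    """Fuse several (weight, mean, covariance) models into one eigenspace.
    The pooled covariance is the weighted within-model covariance plus the
    weighted scatter of the model means around the fused mean."""
    ws = np.array([w for w, _, _ in models], dtype=float)
    ws = ws / ws.sum()                       # normalize the model weights
    means = [m for _, m, _ in models]
    covs = [c for _, _, c in models]
    mu = sum(w * m for w, m in zip(ws, means))
    sigma = sum(w * (c + np.outer(m - mu, m - mu))
                for w, m, c in zip(ws, means, covs))
    evals, evecs = np.linalg.eigh(sigma)     # ascending eigenvalues
    order = np.argsort(evals)[::-1]          # principal axes first
    return mu, evals[order], evecs[:, order]
```

Truncating the returned eigenvectors to the leading modes then yields the fused shape/appearance model.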
96.
Computer Supported Collaborative Learning is a pedagogical approach that can be used for deploying educational games in the classroom. However, there is no clear understanding as to which technological platforms are better suited for deploying co-located collaborative games, nor of the general affordances that are required. In this work we explore two different technological platforms for developing collaborative games in the classroom: one based on augmented reality technology and the other based on multiple-mice technology. In both cases, the same game was introduced to teach electrostatics, and the results were compared experimentally in a real class.
97.
Question-answering systems make good use of knowledge bases (KBs, e.g., Wikipedia) for responding to definition queries. Typically, systems extract relevant facts about the question from articles across KBs, which are then projected onto the candidate answers. However, studies have shown that the performance of this kind of method drops sharply whenever KBs provide only narrow coverage. This work describes a new approach to deal with this problem by constructing context models for scoring candidate answers; these are, more precisely, statistical n-gram language models inferred from lexicalized dependency paths extracted from Wikipedia abstracts. Unlike state-of-the-art approaches, context models are created by capturing the semantics of candidate answers (e.g., "novel," "singer," "coach," and "city"). This work is extended by investigating the impact on context models of extra linguistic knowledge such as part-of-speech tagging and named-entity recognition. Results showed the effectiveness of context models built from n-gram lexicalized dependency paths, which proved to be promising indicators of the presence of definitions in natural-language texts.
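The scoring step, i.e. an n-gram language model assigning higher likelihood to candidates that fit a learned context, can be sketched in a few lines. This is a stand-in under simplifying assumptions: a bigram model over raw tokens with add-one smoothing, whereas the paper's context models are n-gram models over lexicalized dependency paths.

```python
import math
from collections import Counter

def train_bigram(corpus_sentences):
    """Train an add-one-smoothed bigram model from tokenized sentences
    and return a log-probability scorer for new token sequences."""
    unigrams, bigrams = Counter(), Counter()
    for sent in corpus_sentences:
        toks = ["<s>"] + sent                # sentence-start marker
        unigrams.update(toks)
        bigrams.update(zip(toks, toks[1:]))
    vocab = len(unigrams)

    def logprob(sent):
        toks = ["<s>"] + sent
        return sum(
            math.log((bigrams[(a, b)] + 1) / (unigrams[a] + vocab))
            for a, b in zip(toks, toks[1:]))
    return logprob
```

Candidate answers whose contexts resemble the training contexts (e.g., definitional patterns around "singer" or "city") receive higher scores and are ranked first.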
98.
99.
In this paper we present the "R&W Simulator" (version 3.0), a Java simulator of Rescorla and Wagner's prediction-error model of learning. It is able to run whole experimental designs, and to compute and display the associative values of elemental and compound stimuli simultaneously, as well as to use extra configural cues in generating compound values; it also permits changing the US parameters across phases. The simulator produces both numerical and graphical outputs, and includes functionality to export the results to a data-processor spreadsheet. It is user-friendly, and built with a graphical interface designed to allow neuroscience researchers to input the data in their own "language". It is a cross-platform simulator, so it does not require any special equipment, operating system or support program, and does not need installation. The "R&W Simulator" (version 3.0) is available free.
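The model the simulator implements is the standard Rescorla-Wagner update rule, ΔV_i = α_i·β·(λ − ΣV), applied on every trial to each present cue. A minimal sketch of that rule (the function name and trial encoding are illustrative, not the simulator's Java API):

```python
def rescorla_wagner(trials, alphas, beta=0.1, lam=1.0):
    """Rescorla-Wagner prediction-error learning.
    trials: list of (present_cues, reinforced) pairs.
    alphas: per-cue salience. beta: learning rate. lam: US asymptote.
    On each trial, every present cue's associative value moves toward
    the outcome in proportion to the shared prediction error."""
    V = {cue: 0.0 for cue in alphas}
    for cues, reinforced in trials:
        error = (lam if reinforced else 0.0) - sum(V[c] for c in cues)
        for c in cues:
            V[c] += alphas[c] * beta * error
    return V
```

Because the error term is shared across all cues present on a trial, the rule reproduces compound-conditioning phenomena such as blocking, which is what makes whole-design simulation useful.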
100.
The complexity of the data warehouse (DW) development process requires following a methodological approach in order to be successful. A widely accepted approach for this development is the hybrid one, in which requirements and data sources must be accommodated to a new DW model. The main problem is that we lose the relationships between requirements, elements in the multidimensional (MD) conceptual models, and data sources in the process, since no traceability is explicitly specified. This hampers requirements validation and increases the complexity of the Extraction, Transformation and Loading (ETL) processes. In this paper, we propose a novel trace metamodel for DWs and focus on the relationships between requirements and MD conceptual models. We propose a set of Query/View/Transformation rules to include traceability in DWs in an automatic way, allowing us to obtain an MD conceptual model of the DW as well as a trace model. Therefore, we are able to trace every requirement to the MD elements, further increasing user satisfaction. Finally, we show the implementation in our Lucentia BI tool.
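The payoff of an explicit trace model, being able to ask which MD elements realize a requirement and which requirements remain untraced, can be sketched as a toy structure. All names here (`TraceModel`, the "Fact:"/"Dimension:" labels) are hypothetical illustrations, not the paper's metamodel or QVT rules.

```python
from collections import defaultdict

class TraceModel:
    """Toy trace model: records which multidimensional (MD) conceptual
    elements each requirement maps to, so requirements can be validated
    against the generated DW model."""
    def __init__(self):
        self.links = defaultdict(set)   # requirement id -> MD elements

    def add(self, requirement, md_element):
        self.links[requirement].add(md_element)

    def elements_for(self, requirement):
        return sorted(self.links[requirement])

    def untraced(self, requirements):
        # requirements with no MD element: a validation gap
        return [r for r in requirements if not self.links[r]]
```

In the paper this mapping is not built by hand: the QVT transformation rules generate the trace model automatically while deriving the MD conceptual model.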