141.
Bytecode instrumentation is a widely used technique for implementing aspect weaving and dynamic analyses in virtual machines such as the Java virtual machine. Aspect weavers and other instrumentations are usually developed independently, and combining them, when it is possible at all, often requires significant engineering effort. In this article, we present polymorphic bytecode instrumentation (PBI), a simple but effective technique that allows dynamic dispatch among several, possibly independent instrumentations. PBI enables complete bytecode coverage, that is, any method with a bytecode representation can be instrumented. We illustrate further benefits of PBI with three case studies. First, we describe how PBI can be used to implement a comprehensive profiler of inter-procedural and intra-procedural control flow. Second, we provide an implementation of execution levels for AspectJ, which avoids infinite regression and unwanted interference between aspects. Third, we present a framework for adaptive dynamic analysis, in which the analysis to be performed can be changed at runtime by the user. We assess the overhead introduced by PBI and provide thorough performance evaluations of PBI in all three case studies. We show that pure Java profilers like JP2 can, thanks to PBI, produce accurate execution profiles by covering all code, including the core Java libraries. We then demonstrate that PBI-based execution levels are much faster than control flow pointcuts for avoiding interference between aspects, and that their efficient integration in a practical aspect language is possible. Finally, we report that PBI enables adaptive dynamic analysis tools that are more reactive to user inputs than existing tools relying on dynamic aspect-oriented programming with runtime weaving. These experiments position PBI as a widely applicable and practical approach for combining bytecode instrumentations. © 2015 The Authors. Software: Practice and Experience published by John Wiley & Sons Ltd.
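The core idea, dynamic dispatch among several instrumented versions of the same code, can be pictured with a deliberately hand-written Java sketch. The class, method, and field names below are invented for illustration and are not taken from the PBI implementation; in the actual technique, the duplicated method versions and the dispatch logic are emitted into the bytecode by the instrumentation framework rather than written by the programmer.

```java
// Conceptual sketch: one method body kept in several "versions", with a
// runtime dispatch choosing which version executes on each call.
public class PbiSketch {

    enum Version { UNINSTRUMENTED, PROFILING }

    // In a real PBI-style setting this would be per-thread state managed by
    // the framework; a static field keeps the sketch minimal.
    static Version current = Version.UNINSTRUMENTED;

    static long balance;

    static void deposit(long amount) {
        // A weaver would generate this dispatch automatically; it is written
        // by hand here purely for illustration.
        switch (current) {
            case PROFILING:      depositProfiled(amount); break;
            case UNINSTRUMENTED:
            default:             depositPlain(amount);    break;
        }
    }

    static void depositPlain(long amount) {
        balance += amount;
    }

    static void depositProfiled(long amount) {
        System.out.println("enter deposit");   // stand-in for an analysis hook
        balance += amount;
        System.out.println("exit  deposit");
    }

    public static void main(String[] args) {
        deposit(10);                       // runs the plain version
        current = Version.PROFILING;
        deposit(5);                        // runs the profiled version
        System.out.println("balance = " + balance);
    }
}
```

Because every method carries its own dispatch, the active instrumentation can be changed at runtime, which is what makes adaptive analyses and execution levels expressible on top of the same mechanism.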
142.
Originally developed with a single language in mind, the JVM is now targeted by numerous programming languages: its automatic memory management, just-in-time compilation, and adaptive optimizations make it an attractive execution platform. However, the garbage collector, the just-in-time compiler, and other optimizations and heuristics were designed primarily with the performance of Java programs in mind. Consequently, many of the languages targeting the JVM, and especially the dynamically typed languages, suffer from performance problems that cannot be solved simply on the JVM side. In this article, we aim to contribute to the understanding of the character of the workloads imposed on the JVM by both dynamically typed and statically typed JVM languages. To this end, we introduce a new set of dynamic metrics for workload characterization, along with an easy-to-use toolchain to collect the metrics. We apply the toolchain to applications written in six JVM languages (Java, Scala, Clojure, Jython, JRuby, and JavaScript) and discuss the findings. Given the recently identified importance of inlining for the performance of Scala programs, we also analyze the inlining behavior of the HotSpot JVM when executing bytecode originating from different JVM languages. As a result, we identify several traits in the non-Java workloads that represent potential opportunities for optimization. © 2015 The Authors. Software: Practice and Experience published by John Wiley & Sons Ltd.
143.
Textual requirements are very common in software projects. However, this format often hides relevant concerns (e.g., performance, synchronization, data access) from the analyst's view because their semantics are implicit in the text. Thus, analysts must carefully review requirements documents in order to identify key concerns and their effects. Concern mining tools based on NLP techniques can help in this activity. Nonetheless, existing tools cannot always detect all the crosscutting effects of a given concern on different requirements sections, as this detection requires a semantic analysis of the text. In this work, we describe an automated tool called REAssistant that supports the extraction of semantic information from textual use cases in order to reveal latent crosscutting concerns. To enable the analysis of use cases, we apply a tandem of advanced NLP techniques (e.g., dependency parsing, semantic role labeling, and domain actions) built on the UIMA framework, which generates different annotations for the use cases. REAssistant then allows analysts to query these annotations via concern-specific rules in order to identify all the effects of a given concern. The REAssistant tool has been evaluated with several case studies, showing good results when compared to a manual identification of concerns and a third-party tool. In particular, the tool achieved a remarkable recall in detecting crosscutting concern effects.
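The annotation-plus-rule pipeline can be pictured with a heavily simplified sketch. The annotation record and the example rules below are hypothetical stand-ins, not REAssistant's actual UIMA type system or rule syntax: each use-case sentence carries NLP annotations, and a concern rule is simply a predicate over those annotations.

```java
import java.util.List;
import java.util.Map;
import java.util.function.Predicate;

// Hypothetical stand-in for the annotations an NLP pipeline would attach;
// a real type system would also carry dependencies and domain actions.
record AnnotatedSentence(String text, List<String> verbs, List<String> roles) {}

public class ConcernRuleSketch {

    // A "concern rule" here is just a predicate over a sentence's annotations.
    static final Map<String, Predicate<AnnotatedSentence>> RULES = Map.of(
        "performance", s -> s.verbs().stream().anyMatch(v -> v.matches("respond|load|process"))
                            && s.text().toLowerCase().contains("seconds"),
        "persistence", s -> s.verbs().stream().anyMatch(v -> v.matches("store|save|retrieve"))
    );

    public static void main(String[] args) {
        List<AnnotatedSentence> useCase = List.of(
            new AnnotatedSentence("The system stores the order in the database.",
                                  List.of("store"), List.of("Agent:system", "Theme:order")),
            new AnnotatedSentence("The page must load within 2 seconds.",
                                  List.of("load"), List.of("Theme:page")));

        // Query the annotated use case with each concern rule, reporting every
        // sentence affected by the concern.
        RULES.forEach((concern, rule) -> useCase.stream()
              .filter(rule)
              .forEach(s -> System.out.println(concern + " -> " + s.text())));
    }
}
```

The point of the sketch is only the division of labor: the NLP stage produces annotations once, and concern identification becomes a query over them rather than another pass over raw text.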
144.
In this paper, a switched control architecture for constrained control systems is presented. The strategy is based on command governor ideas that are here specialized to 'optimally' schedule switching events on the plant dynamics, improving control performance at a low computational cost. The significance of the method lies mainly in its capability to avoid constraint violations and loss of stability regardless of any configuration change in the plant/constraint structure. To this end, the concept of model transition dwell time is used within the proposed control framework to formally define the minimum time necessary to enable a switching event under guaranteed conditions on overall stability and constraint fulfilment. Simulation results on a simple linear system and on a Cessna 182 aircraft model show the effectiveness of the proposed strategy. Copyright © 2017 John Wiley & Sons, Ltd.
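For readers unfamiliar with the term, the dwell-time idea can be made concrete with a classical bound for switched linear systems; this is background only, and the paper's model transition dwell time is defined for the constrained command governor setting and need not coincide with this formula. If every mode $i$ satisfies $\|e^{A_i t}\| \le \kappa e^{-\lambda t}$ for all $t \ge 0$, with constants $\kappa \ge 1$ and $\lambda > 0$, then keeping each mode active for at least

$$\tau_d > \frac{\ln \kappa}{\lambda}$$

makes the state contract across every switching interval (since $\kappa e^{-\lambda \tau_d} < 1$), which guarantees exponential stability of the switched system.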
145.
For the basic problem of scheduling a set of n independent jobs on a set of m identical parallel machines with the objective of maximizing the minimum machine completion time—also referred to as machine covering—we propose a new exact branch-and-bound algorithm. Its most distinctive components are a different symmetry-breaking solution representation, enhanced lower and upper bounds, and effective novel dominance criteria derived from structural patterns of optimal schedules. Results of a comprehensive computational study conducted on benchmark instances attest to the effectiveness of our approach, particularly for small ratios of n to m.
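For orientation, the standard trivial bounds for machine covering (not the enhanced bounds developed in the paper) are easy to state. With integer processing times $p_1,\dots,p_n$ and $m$ machines, the optimal minimum completion time $C^{*}_{\min}$ satisfies

$$C^{*}_{\min} \;\le\; \left\lfloor \frac{1}{m}\sum_{j=1}^{n} p_j \right\rfloor, \qquad C^{*}_{\min} = 0 \ \text{ whenever } n < m,$$

since the least-loaded machine can never exceed the average load, and with fewer jobs than machines some machine stays empty. Conversely, any feasible schedule, for instance one produced by a longest-processing-time-first assignment, gives a lower bound on $C^{*}_{\min}$; branch and bound works by closing the gap between such bounds.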
146.
This work aims at discovering and extracting relevant patterns underlying social interactions. To do so, knowledge extracted from Facebook, a social networking site, is formalised by means of an Extended Social Graph, a data structure that goes beyond the original concept of a social graph by also incorporating information on interests. Once the Extended Social Graph is built, state-of-the-art techniques are applied over it in order to discover communities. Once these social communities are found, statistical techniques look for relevant patterns common to each of them, so that each cluster of users is characterised by a set of common features. The resulting knowledge is used to develop and evaluate a social recommender system, which suggests possible friends or interests to users in a social network.
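As a minimal sketch of what "extending" a social graph with interests can mean (the representation below is an assumption made for illustration, not the paper's actual data structure), users and interests can both be treated as nodes, with friendship edges between users and interest edges from users to interests:

```java
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

// Illustrative model of an "extended" social graph: user-user friendship
// edges plus user-interest edges held in one structure.
public class ExtendedSocialGraphSketch {

    private final Map<String, Set<String>> friends = new HashMap<>();   // user -> users
    private final Map<String, Set<String>> interests = new HashMap<>(); // user -> interests

    void addFriendship(String a, String b) {
        friends.computeIfAbsent(a, k -> new HashSet<>()).add(b);
        friends.computeIfAbsent(b, k -> new HashSet<>()).add(a);
    }

    void addInterest(String user, String interest) {
        interests.computeIfAbsent(user, k -> new HashSet<>()).add(interest);
    }

    // Interests shared by two users: the kind of feature a recommender built
    // on such a graph could use to score friend or interest suggestions.
    Set<String> sharedInterests(String a, String b) {
        Set<String> shared = new HashSet<>(interests.getOrDefault(a, Set.of()));
        shared.retainAll(interests.getOrDefault(b, Set.of()));
        return shared;
    }

    public static void main(String[] args) {
        ExtendedSocialGraphSketch g = new ExtendedSocialGraphSketch();
        g.addFriendship("alice", "bob");
        g.addInterest("alice", "hiking");
        g.addInterest("carol", "hiking");
        System.out.println(g.sharedInterests("alice", "carol")); // [hiking]
    }
}
```

Community detection and pattern mining then operate over both edge types, which is what lets each discovered community be described by the interests its members share.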
147.
Cognition, Technology & Work - From the 1950s through the 1980s, aircraft design was marked by an increase in reliability and automation, and, correspondingly, a decrease in the crew complement...
148.
Walter Zulehner, Computing 65(3):227–246, 2000
In this paper, smoothing properties are shown for a class of iterative methods for saddle point problems, with smoothing rates of order 1/m, where m is the number of smoothing steps. This generalizes recent results by Braess and Sarazin, who proved these rates for methods in which, in the context of the Stokes problem, the pressure correction equation is solved exactly; that assumption is not needed here. Received December 4, 1998; revised April 14, 2000
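In generic notation (not necessarily the paper's), a saddle point problem has the block form

$$\begin{pmatrix} A & B^{T} \\ B & 0 \end{pmatrix}\begin{pmatrix} u \\ p \end{pmatrix} = \begin{pmatrix} f \\ g \end{pmatrix},$$

and a smoothing property with rate of order $1/m$ states that the error $e^{(m)} = x^{(m)} - x^{*}$ after $m$ smoothing steps satisfies, in suitably scaled norms,

$$\big\| \mathcal{A}\, e^{(m)} \big\| \;\le\; \frac{c}{m}\, \big\| e^{(0)} \big\|$$

with a constant $c$ independent of the mesh size, where $\mathcal{A}$ denotes the system operator. Combined with an approximation property of the coarse-grid correction, such a smoothing property yields mesh-independent multigrid convergence, which is why the rate $1/m$ matters.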
149.
150.
aITALC, a new tool for automating loop calculations in high energy physics, is described. The package creates Fortran code for two-fermion scattering processes automatically, starting from the generation and analysis of the Feynman graphs. We describe the modules of the tool, the intercommunication between them and illustrate its use with three examples.

Program summary

Title of the program: aITALC version 1.2.1 (9 August 2005)
Catalogue identifier: ADWO
Program summary URL: http://cpc.cs.qub.ac.uk/summaries/ADWO
Program obtainable from: CPC Program Library, Queen's University of Belfast, N. Ireland
Computer: PC i386
Operating system: GNU/Linux, tested on distributions SuSE 8.2 to 9.3, Red Hat 7.2, Debian 3.0, Ubuntu 5.04; also on Solaris
Programming languages used: GNU Make, Diana, Form, Fortran77
Additional programs/libraries used: Diana 2.35 (Qgraf 2.0), Form 3.1, LoopTools 2.1 (FF)
Memory required to execute with typical data: up to about 10 MB
No. of processors used: 1
No. of lines in distributed program, including test data, etc.: 40 926
No. of bytes in distributed program, including test data, etc.: 371 424
Distribution format: tar gzip file
High-speed storage required: from 1.5 to 30 MB, depending on modules present and unfolding of examples
Nature of the physical problem: calculation of differential cross sections for e+e− annihilation in one-loop approximation.
Method of solution: generation and perturbative analysis of Feynman diagrams with later evaluation of matrix elements and form factors.
Restriction on the complexity of the problem: the limit of application is, for the moment, 2→2 particle reactions in the electro-weak standard model.
Typical running time: a few minutes, depending strongly on the complexity of the process and the Fortran compiler.