4671 results found, search time 0 ms
101.
The LR(0) goto-graph is the basis for the construction of parsers for several interesting grammar classes such as LALR and GLR. Early work has shown that even when a grammar is an extension of another, the goto-graph of the first is not necessarily a subgraph of that of the second. Some authors have presented algorithms to grow and shrink these graphs incrementally, but a formal proof of the existence of a particular relation between a given goto-graph and its grown or shrunk counterpart still seems to be missing from the literature. In this paper we use the recursive projection of paths of limited length to prove the existence of one such relation when the sets of productions are in a subset relation. We also use this relation to present two algorithms (Grow and Shrink) that transform the goto-graph of a given grammar into the goto-graph of an extension or a restriction of that grammar. We implemented these algorithms in a dynamically updatable LALR parser generator called DEXTER (the Dynamically EXTEnsible Recognizer), which we now ship with the current implementation of the Neverlang framework for programming language development.
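The construction that Grow and Shrink operate on can be sketched concretely. The grammar below is a toy example chosen for illustration (it is not from the paper), and the code implements only the standard LR(0) closure/goto construction, not DEXTER's incremental algorithms:

```python
from collections import defaultdict

# Toy grammar, assumed for illustration: S -> A ; A -> A 'a' | 'a'
GRAMMAR = {"S": [("A",)], "A": [("A", "a"), ("a",)]}
NONTERMINALS = set(GRAMMAR)

def closure(items):
    """LR(0) closure: add fresh items for each nonterminal after the dot."""
    items = set(items)
    changed = True
    while changed:
        changed = False
        for lhs, rhs, dot in list(items):
            if dot < len(rhs) and rhs[dot] in NONTERMINALS:
                for prod in GRAMMAR[rhs[dot]]:
                    item = (rhs[dot], prod, 0)
                    if item not in items:
                        items.add(item)
                        changed = True
    return frozenset(items)

def goto(state, symbol):
    """Shift the dot over `symbol` in every item that allows it."""
    moved = {(lhs, rhs, dot + 1)
             for lhs, rhs, dot in state
             if dot < len(rhs) and rhs[dot] == symbol}
    return closure(moved) if moved else None

def goto_graph():
    """Build all LR(0) item sets and transitions for the toy grammar."""
    start = closure({("S'", ("S",), 0)})       # augmented start item
    states, edges, work = {start}, defaultdict(dict), [start]
    while work:
        state = work.pop()
        for sym in {rhs[dot] for _, rhs, dot in state if dot < len(rhs)}:
            nxt = goto(state, sym)
            edges[state][sym] = nxt
            if nxt not in states:
                states.add(nxt)
                work.append(nxt)
    return states, edges
```

Extending `GRAMMAR` with new productions and rebuilding from scratch is exactly the cost the incremental Grow algorithm is designed to avoid.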
102.
Reorganisation and evolution of class hierarchies is important for object-oriented system development and has received considerable attention in the literature. The contributions of this paper are: (1) a formal study of a set of extension relations and transformations on class hierarchies; (2) a presentation of a small set of primitive transformations which form a minimal and complete basis for the extension relations; and (3) an analysis of the impact of these transformations at the object level. The study leads to a better understanding of evolution and reuse of object-oriented software and class hierarchies. It also provides a terminology and a means of classification for design reuse. The theory presented in this paper is based on the Demeter data model, which gives a concise mathematical foundation for classes and their inheritance and part-of relationships. Parts of the theory have been implemented in the Demeter System™ C++, a CASE tool for object-oriented design and programming.
103.

A generic model of trust for electronic commerce is presented. The basic components of the model are party trust and control trust. It is argued that an agent's trust in a transaction with another party is a combination of trust in the other party and trust in the control mechanisms that ensure the successful performance of the transaction. The generic trust model can be used for the design of trust-related value-added services in electronic commerce. To illustrate this use of the model, two activities in electronic commerce that require trust are compared, namely electronic payment and cross-border electronic trade. The model shows that these two activities actually require two different types of trust, and that completely different services are needed to create them.
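As a sketch of how the two components might be combined in a trust service, the function below assumes a noisy-OR combination rule; the paper leaves the combination abstract, so both the rule and the [0, 1] trust scale are illustrative assumptions, not the authors' model:

```python
def transaction_trust(party_trust: float, control_trust: float) -> float:
    """Combine trust in the other party with trust in the control
    mechanisms.  Noisy-OR is assumed purely for illustration: the
    transaction is trusted if either source of trust suffices."""
    assert 0.0 <= party_trust <= 1.0 and 0.0 <= control_trust <= 1.0
    return 1.0 - (1.0 - party_trust) * (1.0 - control_trust)
```

Under this rule, strong control mechanisms (e.g. an escrowed payment) can substitute for weak party trust, which matches the paper's argument that cross-border trade and electronic payment call for different trust-creating services.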
104.
Minimal models of adapted neuronal response to in vivo-like input currents
Rate models are often used to study the behavior of large networks of spiking neurons. Here we propose a procedure to derive rate models that take into account the fluctuations of the input current and firing-rate adaptation, two ubiquitous features in the central nervous system that have been previously overlooked in constructing rate models. The procedure is general and applies to any model of firing unit. As examples, we apply it to the leaky integrate-and-fire (IF) neuron, the leaky IF neuron with reversal potentials, and to the quadratic IF neuron. Two mechanisms of adaptation are considered, one due to an afterhyperpolarization current and the other to an adapting threshold for spike emission. The parameters of these simple models can be tuned to match experimental data obtained from neocortical pyramidal neurons. Finally, we show how the stationary model can be used to predict the time-varying activity of a large population of adapting neurons.
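The kind of firing unit the procedure applies to can be illustrated with a minimal simulation: a leaky IF neuron with an adapting spike threshold, driven by a fluctuating ("in vivo-like") input current. All parameter values below are illustrative placeholders, not figures from the paper:

```python
import math
import random

def simulate_lif_adapting(t_stop=1.0, dt=1e-4, seed=0):
    """Leaky integrate-and-fire neuron with an adapting threshold,
    driven by noisy input.  Returns the list of spike times (s)."""
    rng = random.Random(seed)
    tau_m, tau_th = 0.02, 0.1      # membrane / threshold time constants (s)
    theta0, d_theta = 1.0, 0.3     # resting threshold, jump per spike
    mu, sigma = 60.0, 5.0          # mean drive and fluctuation amplitude
    v, theta, spikes, t = 0.0, theta0, [], 0.0
    while t < t_stop:
        noise = sigma * math.sqrt(dt) * rng.gauss(0.0, 1.0)
        v += dt * (-v / tau_m + mu) + noise      # leaky integration
        theta += dt * (theta0 - theta) / tau_th  # threshold relaxes back
        if v >= theta:                           # spike: reset and adapt
            spikes.append(t)
            v = 0.0
            theta += d_theta
        t += dt
    return spikes
```

Each spike raises the threshold, so the interspike intervals lengthen until threshold growth and relaxation balance, which is the adapted stationary rate the derived rate model is meant to capture.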
105.
Virtual execution environments, such as the Java virtual machine, promote platform-independent software development. However, when it comes to analyzing algorithm complexity and performance bottlenecks, available tools focus on platform-specific metrics, such as the CPU time consumption on a particular system. Other drawbacks of many prevailing profiling tools are high overhead, significant measurement perturbation, as well as reduced portability of profiling tools, which are often implemented in platform-dependent native code. This article presents a novel profiling approach, which is entirely based on program transformation techniques, in order to build a profiling data structure that provides calling-context-sensitive program execution statistics. We explore the use of platform-independent profiling metrics in order to make the instrumentation entirely portable and to generate reproducible profiles. We implemented these ideas within a Java-based profiling tool called JP. A significant novelty is that this tool achieves complete bytecode coverage by statically instrumenting the core runtime libraries and dynamically instrumenting the rest of the code. JP provides a small and flexible API to write customized profiling agents in pure Java, which are periodically activated to process the collected profiling information. Performance measurements point out that, despite the presence of dynamic instrumentation, JP causes significantly less overhead than a prevailing tool for the profiling of Java code. Copyright © 2008 John Wiley & Sons, Ltd.
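The calling-context-sensitive data structure JP builds is essentially a calling-context tree (CCT). The sketch below reproduces the idea in miniature: JP instruments bytecode, whereas here a hypothetical `profiled` decorator stands in for the instrumentation, and per-context invocation counts serve as the platform-independent metric:

```python
import functools

class CCTNode:
    """One node of a calling-context tree: a function name plus the
    number of invocations in this particular calling context."""
    def __init__(self, name):
        self.name, self.count, self.children = name, 0, {}

ROOT = CCTNode("<root>")
_stack = [ROOT]                      # currently active calling context

def profiled(fn):
    """Stand-in for bytecode instrumentation: wrap a function so each
    call extends/updates the CCT rooted at ROOT."""
    @functools.wraps(fn)
    def wrapper(*args, **kwargs):
        node = _stack[-1].children.setdefault(fn.__name__, CCTNode(fn.__name__))
        node.count += 1
        _stack.append(node)
        try:
            return fn(*args, **kwargs)
        finally:
            _stack.pop()
    return wrapper

@profiled
def helper():
    return 1

@profiled
def task_a():
    return helper() + helper()

@profiled
def task_b():
    return helper()

task_a()
task_b()
```

After the two calls, `helper` appears twice in the tree, once under each caller: that separation by calling context is what distinguishes a CCT from a flat call-count profile.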
106.
Originally developed with a single language in mind, the JVM is now targeted by numerous programming languages, its automatic memory management, just-in-time compilation, and adaptive optimizations making it an attractive execution platform. However, the garbage collector, the just-in-time compiler, and other optimizations and heuristics were designed primarily with the performance of Java programs in mind. Consequently, many of the languages targeting the JVM, and especially the dynamically typed languages, suffer from performance problems that cannot simply be solved at the JVM side. In this article, we aim to contribute to the understanding of the character of the workloads imposed on the JVM by both dynamically typed and statically typed JVM languages. To this end, we introduce a new set of dynamic metrics for workload characterization, along with an easy-to-use toolchain to collect the metrics. We apply the toolchain to applications written in six JVM languages (Java, Scala, Clojure, Jython, JRuby, and JavaScript) and discuss the findings. Given the recently identified importance of inlining for the performance of Scala programs, we also analyze the inlining behavior of the HotSpot JVM when executing bytecode originating from different JVM languages. As a result, we identify several traits in the non-Java workloads that represent potential opportunities for optimization. © 2015 The Authors. Software: Practice and Experience Published by John Wiley & Sons Ltd.
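One dynamic metric directly relevant to inlining is call-site polymorphism: how many distinct receiver types each call site observes at run time. The sketch below is an illustrative analogue of such a metric collector, not the paper's toolchain; the `record` and `polymorphism_profile` names are invented for this example:

```python
from collections import defaultdict

SITE_TYPES = defaultdict(set)            # call-site id -> receiver types seen

def record(site, receiver):
    """Record one dynamic dispatch event at a call site."""
    SITE_TYPES[site].add(type(receiver).__name__)

def polymorphism_profile():
    """Dynamic metric: receiver-type count per call site.  Megamorphic
    sites are the ones a JIT typically cannot inline."""
    return {site: len(types) for site, types in SITE_TYPES.items()}

# One polymorphic and one monomorphic call site:
record("load", [1, 2])
record("load", "abc")
record("print", 3)
```

Dynamically typed JVM languages tend to produce far more polymorphic sites than plain Java, which is one way such metrics expose the non-Java workload traits the article discusses.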
107.
With the recent ban of pentabromodiphenyl ether (technical PentaBDE) and octabromodiphenyl ether (technical OctaBDE) mixtures in the European Union (EU) and in parts of the United States, decabromodiphenyl ether (technical DecaBDE) remains the only polybrominated diphenyl ether (PBDE) based flame retardant available today. The EU risk assessment report for DecaBDE identified a high level of uncertainty associated with the suitability of the current risk assessment approach for secondary poisoning by debromination of DecaBDE to toxic lower brominated diphenyl ethers. Addressing this still open question, we investigated concentrations and temporal trends of DecaBDE, NonaBDE, and OctaBDE congeners in the sediments of Greifensee, a small lake located in an urban area close to Zürich, Switzerland. PBDE appeared first in sediment layers corresponding to the mid-1970s. While total Tri-HeptaBDE (BDE-28, -47, -99, -100, -153, -154 and -183) concentrations leveled off in the mid-1990s at about 1.6 ng/g dw (dry weight), DecaBDE levels increased steadily to 7.4 ng/g dw in 2001 with a doubling time of 9 years. Hexabromocyclododecanes (HBCD) appeared in Greifensee sediments in the mid-1980s. They are an important class of flame retardants that are being used in increasing amounts today. As was observed for DecaBDE, HBCD concentrations increased continuously, reaching 2.5 ng/g dw in 2001. Next to DecaBDE, all 3 NonaBDE congeners (BDE-208, BDE-207, and BDE-206) and at least 7 out of the 12 possible OctaBDE congeners (BDE-202, BDE-201, BDE-197/204, BDE-198/203, BDE-196/200, BDE-205, and BDE-194) were detected in the sediments of Greifensee. The highest concentrations were found in the surface sediments, with 7.2, 0.26, 0.14, and 1.6 ng/g dw for Deca-, Nona-, Octa-, and the sum of Tri-HeptaBDE, respectively. While DecaBDE and NonaBDE were found to increase rapidly, the increase of OctaBDE was slower.
Congener patterns of Octa- and NonaBDE present in sediments of Greifensee did not change with time. Consequently, there was no evidence for sediment-mediated long-term transformation of PBDE within the observed time span of almost 30 years. Despite the high persistence of DecaBDE, environmental debromination occurs, as shown by the detection of a shift in congener patterns of Octa- and NonaBDE in sediments compared to the respective congener patterns in technical PBDE products. The OctaBDE congener BDE-202 was detected in sediments, representing a transformation product that is not reported in any of the technical PBDE products. Comparison of OctaBDE congener patterns in sediments with OctaBDE congener patterns from known sources reveals (i) that they were distinctively different from the congener patterns in technical PBDE products and (ii) that they were similar to the OctaBDE patterns in house dust and photodegradation products of DecaBDE, suggesting contributions from these sources.
108.
An unknown red dye was discovered in a sumac spice sample during routine analysis for Sudan dyes. LC-DAD and LC-MS/MS did not reveal the identity of the red substance. Nevertheless, using LC-high-resolution MS and isotope ratio comparisons the structure was identified as Basic Red 46. The identity of the dye was further confirmed by comparison with a commercial hair-staining product and two textile dye formulations containing Basic Red 46. Analogous to the Sudan dyes, Basic Red 46 is an azo dye. However, some of the sample clean-up methodology utilised for the analysis of Sudan dyes in food prevents its successful detection. In contrast to the Sudan dyes, Basic Red 46 is a cation. Its cationic properties make it bind strongly to gel permeation columns and silica solid-phase extraction cartridges and prevent elution with standard eluents. This is the first report of Basic Red 46 in food. The structure elucidation of this compound as well as the disadvantages of analytical methods focusing on a narrow group of targeted analytes are discussed.
109.
Code injection attacks are one of the most powerful and important classes of attacks on software. In these attacks, the attacker sends malicious input to a software application, where it is stored in memory. The malicious input is chosen in such a way that its representation in memory is also a valid representation of a machine code program that performs actions chosen by the attacker. The attacker then triggers a bug in the application to divert the control flow to this injected machine code. A typical action of the injected code is to launch a command interpreter shell, and hence the malicious input is often called shellcode. Attacks are usually performed against network facing applications, and such applications often perform validations or encodings on input. Hence, a typical hurdle for attackers is that the shellcode has to pass one or more filtering methods before it is stored in the vulnerable application's memory space. Clearly, for a code injection attack to succeed, the malicious input must survive such validations and transformations. Alphanumeric input (consisting only of letters and digits) is typically very robust for this purpose: it passes most filters and is untouched by most transformations. This paper studies the power of alphanumeric shellcode on the ARM architecture. It shows that the subset of ARM machine code programs that (when interpreted as data) consist only of alphanumerical characters is a Turing complete subset. This is a non-trivial result, as the number of instructions that consist only of alphanumeric characters is very limited. To craft useful exploit code (and to achieve Turing completeness), several tricks are needed, including the use of self-modifying code.
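The filtering hurdle is easy to state in code: a strict alphanumeric filter leaves only 62 of the 256 possible byte values, so every 32-bit instruction word of the shellcode must be built from that small set. A minimal sketch of such a byte-level filter check (no actual ARM encoding logic, which is the hard part the paper addresses):

```python
import string

# Byte values a strict alphanumeric filter lets through: 62 of 256.
ALNUM = frozenset((string.ascii_letters + string.digits).encode())

def survives_alnum_filter(payload: bytes) -> bool:
    """True if every byte of the payload is a letter or digit, i.e.
    it would pass a typical alphanumeric input filter untouched."""
    return all(b in ALNUM for b in payload)

def word_is_alnum(word: int) -> bool:
    """True if a 32-bit instruction word, stored little-endian,
    consists only of alphanumeric bytes."""
    return survives_alnum_filter(word.to_bytes(4, "little"))
```

For example, the word 0x41414141 (four 'A' bytes) passes, while a typical instruction encoding containing a zero byte does not; the paper's contribution is showing that the instructions surviving this check still form a Turing complete subset.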
110.
A formal methodology is proposed to reduce the amount of information displayed to remote human operators at interfaces to large-scale process control plants of a certain type. The reduction proceeds in two stages. In the first stage, minimal reduced subsets of components, which give full information about the state of the whole system, are generated by determining functional dependencies between components. This is achieved by using a temporal logic proof obligation to check whether the state of all components can be inferred from the state of components in a subset in specified situations that the human operator needs to detect, with respect to a finite state machine model of the system and other human operator behavior. Generation of reduced subsets is automated with the help of a temporal logic model checker. The second stage determines the interconnections between components to be displayed in the reduced system so that the natural overall graphical structure of the system is maintained. A formal definition of an aesthetic for the required subgraph of a graph representation of the full system, containing the reduced subset of components, is given for this purpose. The methodology is demonstrated by a case study.
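The first stage can be illustrated without a model checker. Given the reachable states of a finite state machine as tuples of component states, the brute-force sketch below finds the minimal component subsets from which the full state can be inferred; the example plant and all names are invented for illustration:

```python
from itertools import combinations

def determines(states, subset):
    """True if the components at `subset` indices functionally
    determine the full system state in every reachable state."""
    seen = {}
    for s in states:
        key = tuple(s[i] for i in subset)
        if seen.setdefault(key, s) != s:
            return False
    return True

def minimal_reduced_subsets(states):
    """All minimal index subsets from which the whole state can be
    inferred -- the first stage of the display reduction, done here
    by brute-force enumeration instead of model checking."""
    n = len(states[0]) if states else 0
    determining = [sub for k in range(n + 1)
                   for sub in combinations(range(n), k)
                   if determines(states, sub)]
    return [s for s in determining
            if not any(set(t) < set(s) for t in determining)]

# Hypothetical 3-component plant (pump, valve, alarm): the alarm
# always mirrors the pump, so it never needs its own display slot.
REACHABLE = [(0, 0, 0), (1, 0, 1), (1, 1, 1), (0, 1, 0)]
```

On this toy system the minimal subsets are (pump, valve) and (valve, alarm): either pair determines the third component, so the operator display can omit one component without losing state information. The temporal logic proof obligation in the paper plays the role of `determines`, restricted to the situations the operator must detect.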
Copyright © 北京勤云科技发展有限公司 (Beijing Qinyun Technology Development Co., Ltd.)  京ICP备09084417号