991.
Glenn, Richard R., Suresh. Computers & Security, 2006, 25(8): 600-615
Network Denial-of-Service (DoS) attacks, which disable network services by flooding them with spurious packets, are on the rise. Criminals with large networks (botnets) of compromised nodes (zombies) use the threat of DoS attacks to extort legitimate companies. To counter these threats and ensure network reliability, early detection of these attacks is critical, but the many methods developed to date have had limited success. This paper presents an approach that identifies change points in the time series of network packet arrival rates. The proposed process has two stages: (i) statistical analysis that finds the rate of increase of network traffic, and (ii) wavelet analysis of the network statistics that quickly detects the sudden increases in packet arrival rates characteristic of botnet attacks. Most intrusion detection methods are tested using data sets from special security testing configurations, which leads to unacceptable false positive rates when they are deployed in the real world. We test our approach using data from both network simulations and a large operational network. The true and false positive detection rates are determined for both data sets, and receiver operating characteristic curves use these rates to find optimal parameters for our approach. Evaluation using operational data demonstrates the effectiveness of our approach.
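The wavelet stage lends itself to a compact illustration. The Python sketch below flags sudden jumps in a packet-arrival-rate series using single-level Haar wavelet detail coefficients; the robust MAD threshold and every parameter are assumptions made for illustration, not the authors' published settings.

```python
# Illustrative sketch (NumPy only): detect sudden rate jumps with
# single-level Haar detail coefficients and a robust MAD threshold.
import numpy as np

def haar_change_points(rates, k=5.0):
    """Return time-bin indices whose Haar detail coefficient is anomalous."""
    x = np.asarray(rates, dtype=float)
    if len(x) % 2:
        x = x[:-1]                      # Haar pairs samples: need even length
    detail = (x[0::2] - x[1::2]) / np.sqrt(2.0)
    mad = np.median(np.abs(detail - np.median(detail)))
    thresh = k * 1.4826 * mad           # robust estimate of the detail spread
    return np.flatnonzero(np.abs(detail) > thresh) * 2

# Toy traffic: Poisson background with a flood starting at bin 601
# (mid-pair, so the level-1 Haar detail straddles the jump).
rng = np.random.default_rng(0)
traffic = rng.poisson(100, 1000).astype(float)
traffic[601:] += 400.0
print(haar_change_points(traffic))      # expect a detection at bin 600
```

A production detector would also test the shifted-by-one pairing, since a jump aligned exactly with a pair boundary is invisible to one fixed alignment.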
992.
993.
Computational science increasingly supports advances in scientific and engineering knowledge. The unique constraints of these projects result in a development process that differs from the one more traditional information technology projects use. This article reports the results of the sixth case study conducted under the DARPA High Productivity Computing Systems Program. The case study aimed to investigate the technical challenges of code development in this environment, understand the use of development tools, and document the findings as concrete lessons learned for other developers' benefit. The project studied here is a major component of a weather forecasting system of systems; it includes complex behavior and interaction of several individual physical systems (such as the atmosphere and the ocean). This article describes the development of the code and presents important lessons learned.
994.
Modern computer-controlled robots typically fail at their tasks whenever they encounter an error, no matter how minor. The physical environment of a typical assembly robot is too unstructured to benefit from conventional software approaches to reliability. We present an approach that interfaces the real-time operation of the robot with an intelligent subsystem that performs error analysis and forward recovery when a failure occurs. Our approach involves a special representation of the robot's program that efficiently tracks the robot's operation in real time and is easy to modify to include automatically generated recovery procedures.
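As a rough illustration of this idea only (not the authors' program representation), the hypothetical Python sketch below tracks a robot program as a sequence of steps with postcondition checks; when a check fails, recovery steps are spliced into the plan before the failed step is retried. All names and the recovery policy are invented for the example.

```python
# Hypothetical sketch: track a robot program and splice in recovery steps
# on failure (forward recovery). A real system would bound retries.

def run_program(steps, recovery_for):
    """steps: list of (name, action, check); action() performs the step,
    check() returns True if the world is in the expected state.
    recovery_for(name): recovery steps for a failed step."""
    queue = list(steps)
    while queue:
        name, action, check = queue.pop(0)
        action()
        if not check():                 # error detected at this step
            print(f"step {name!r} failed; inserting recovery")
            queue = recovery_for(name) + [(name, action, check)] + queue

# Toy world: a gripper that drops the part on the first grasp attempt.
state = {"holding": False, "attempts": 0}

def grasp():
    state["attempts"] += 1
    state["holding"] = state["attempts"] > 1   # succeeds on the second try

def place():
    state["holding"] = False

run_program(
    [("grasp", grasp, lambda: state["holding"]),
     ("place", place, lambda: not state["holding"])],
    recovery_for=lambda name: [("reopen", lambda: None, lambda: True)],
)
```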
995.
Several related algorithms are presented for computing logarithms in fields GF(p), p a prime. Heuristic arguments predict a running time of exp((1 + o(1))√(ln p ln ln p)) for the initial precomputation phase that is needed for each p, and much shorter running times for computing individual logarithms once the precomputation is done. The running time of the precomputation is roughly the same as that of the fastest known algorithms for factoring integers of size about p. The algorithms use the well-known basic scheme of obtaining linear equations for logarithms of small primes and then solving them to obtain a database to be used for the computation of individual logarithms. The novel ingredients are new ways of obtaining linear equations and new methods of solving these linear equations by adapting sparse matrix methods from numerical analysis to the case of finite rings. While some of the new logarithm algorithms are adaptations of known integer factorization algorithms, others are new and can be adapted to yield integer factorization algorithms.
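The "basic scheme" is easy to sketch. The toy Python code below performs only the relation-collection stage of index calculus for a small prime: it searches for powers g^k mod p that are smooth over a small factor base, each yielding one linear equation in the unknown logarithms of the factor-base primes. The parameters are illustrative, and the sparse linear solve mod p-1 (the paper's second ingredient) is deliberately omitted.

```python
# Toy relation collection for index calculus in GF(p). Each relation
# (k, e) asserts  k = sum_i e[i] * log_g(base[i])  (mod p-1).

def small_primes(bound):
    """Primes up to bound by trial division (fine for a toy example)."""
    primes = []
    for n in range(2, bound + 1):
        if all(n % q for q in primes if q * q <= n):
            primes.append(n)
    return primes

def factor_over_base(n, base):
    """Exponent vector of n over the factor base, or None if not smooth."""
    exps = [0] * len(base)
    for i, q in enumerate(base):
        while n % q == 0:
            n //= q
            exps[i] += 1
    return exps if n == 1 else None

def collect_relations(p, g, bound=20, want=None):
    base = small_primes(bound)
    want = want or len(base) + 5    # a few extra equations for the solve
    relations = []
    k, x = 1, g % p
    while len(relations) < want and k < p - 1:
        exps = factor_over_base(x, base)
        if exps is not None:
            relations.append((k, exps))
        k, x = k + 1, (x * g) % p
    return base, relations

# Example with p = 1019 and primitive root g = 2 (toy sizes only).
base, rels = collect_relations(1019, 2)
print(base)
print(rels[:3])
```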
996.
Comparing tree-structured data for structural similarity is a recurring theme and one on which much effort has been spent. Most approaches so far are grounded, implicitly or explicitly, in algorithmic information theory, being approximations to an information distance derived from Kolmogorov complexity. In this paper we propose a novel complexity metric, also grounded in information theory, but calculated via Shannon's entropy equations. This is used to formulate a directly and efficiently computable metric for the structural difference between unordered trees. The paper explains the derivation of the metric in terms of information theory, and proves the essential property that it is a distance metric. The property of boundedness means that the metric can be used in contexts such as clustering, where second-order comparisons are required. The distance metric property means that the metric can be used in the context of similarity search and metric spaces in general, allowing trees to be indexed and stored within this domain. We are not aware of any other tree similarity metric with these properties.
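To make the flavour of an entropy-based structural measure concrete, here is a toy Python sketch, which is not the authors' construction: it scores an unordered tree by the Shannon entropy of its node-degree distribution and compares two trees by the absolute difference of the scores. Unlike the paper's metric, this toy difference is neither a proper distance metric on trees nor bounded in the required sense.

```python
# Toy entropy score for an unordered tree: Shannon entropy of the
# node-degree distribution. Illustrates the information-theoretic flavour
# only; NOT the paper's bounded distance metric.
import math
from collections import Counter

def degree_entropy(tree):
    """tree: nested pairs (label, children), e.g. ('a', [('b', []), ...])."""
    degrees = []
    stack = [tree]
    while stack:
        _, children = stack.pop()
        degrees.append(len(children))
        stack.extend(children)
    n = len(degrees)
    counts = Counter(degrees)
    return -sum(c / n * math.log2(c / n) for c in counts.values())

t1 = ('r', [('a', []), ('b', [('c', []), ('d', [])])])
t2 = ('r', [('a', [('b', [])]), ('c', [])])
print(degree_entropy(t1), degree_entropy(t2))
print(abs(degree_entropy(t1) - degree_entropy(t2)))  # crude difference score
```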
997.
Medication omissions and dosing failures are frequent during transitions in patient care. Medication reconciliation (MR) requires bridging discrepancies in a patient's medical history as the setting of care changes. MR has been identified as vulnerable to failure, yet a clinician's cognition during MR remains poorly described in the literature. We sought to explore cognition in MR tasks, specifically how clinicians make sense of conditions and medications. We observed 24 anesthesia providers performing a card-sorting task to sort conditions and medications for a fictional patient, and analyzed the spatial properties of the data using statistical methods. Most of the participants (58%) arranged the medications along a straight line (p < 0.001). They sorted medications by organ system (Friedman's χ²(54) = 325.7, p < 0.001). These arrangements reflected the clinical correspondence between each pair of medications (Wilcoxon W = 192.0, p < 0.001). A cluster analysis showed that the subjects matched conditions and medications related to the same organ system (Wilcoxon W = 1917.0, p < 0.001). We conclude that the clinicians commonly arranged the information into two groups (conditions and medications) and assigned an internal order within these groups according to organ systems; they also matched conditions to medications by similar criteria. These findings were also supported by verbal protocol analysis, and they strengthen the argument that organ-based information is pivotal to a clinician's cognition during MR. Understanding the strategies and heuristics clinicians employ throughout the MR process may help develop practices that promote patient safety.
998.
The issues surrounding the question of atomicity, both in the past and nowadays, are briefly reviewed, and a picture of an ACID (atomic, consistent, isolated, durable) transaction as a refinement problem is presented. An example of a simple air traffic control (ATC) system is introduced, and the discrepancies that can arise when read-only operations examine the state at atomic and fine-grained levels are handled by retrenchment. Non-ACID timing aspects of the ATC example are also handled by retrenchment, and the treatment is generalised to yield the Retrenchment Atomicity Pattern. The utility of the pattern is confirmed against a number of different case studies. One is the Mondex Electronic Purse, its protocol treated as a conventional atomic transaction. Another is the recovery protocol of Mondex, viewed as a compensated transaction (leading to the view that compensated transactions in general fit the pattern). A final one comprises various unruly phenomena occurring in implementations of software transactional memory systems, which can frequently display non-ACID behaviour. In all cases the Atomicity Pattern is seen to perform well.
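Since the pattern is motivated in part by compensated transactions, a minimal generic sketch may help fix the idea. This is the folklore structure only, not the paper's retrenchment-based formalisation, and all names are hypothetical: each step carries a compensating action, and on failure the compensations for completed steps run in reverse order.

```python
# Generic compensated transaction: undo completed steps via explicit
# compensating actions rather than atomic rollback.

def run_compensated(steps):
    """steps: list of (do, undo) callables. Returns True on commit."""
    done = []
    try:
        for do, undo in steps:
            do()
            done.append(undo)
        return True                     # every step succeeded: commit
    except Exception as exc:
        print(f"failure: {exc}; compensating")
        for undo in reversed(done):     # undo completed steps, newest first
            undo()
        return False

balance = {"a": 10, "b": 0}

def debit_a():    balance["a"] -= 5
def undo_debit(): balance["a"] += 5
def fail():       raise RuntimeError("network down")

ok = run_compensated([(debit_a, undo_debit), (fail, lambda: None)])
print(ok, balance)                      # False {'a': 10, 'b': 0}
```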
999.
We recently introduced evolutive tandem repeats with jump under the Hamming distance (Proc. MFCS '02: the 27th International Symposium on Mathematical Foundations of Computer Science, Warszawa-Otwock, Poland, August 2002, Lecture Notes in Computer Science, Vol. 2420, Springer, Berlin, pp. 292-304), which consist of a series of almost contiguous copies with the following property: the Hamming distance between two consecutive copies is always smaller than a given parameter e. In this article, we present a significant improvement that speeds up the detection of evolutive tandem repeats. It is based on the progressive computation of distances between candidate copies participating in the evolutive tandem repeat. This leads to a new algorithm, still quadratic in the worst case, but much more efficient on average, allowing larger sequences to be processed.
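To fix the definition, here is a naive Python sketch of the defining property for one fixed copy length. It ignores the "jump" between copies and the paper's progressive-distance speedup; the copy length and threshold e are inputs chosen for illustration.

```python
# Naive scan for runs of contiguous fixed-length copies in which every
# pair of CONSECUTIVE copies has Hamming distance <= e. Illustration of
# the abstract's property only, not the paper's algorithm.

def hamming(a, b):
    return sum(x != y for x, y in zip(a, b))

def evolutive_runs(s, copy_len, e):
    """Yield (start, n_copies) for maximal runs of consecutive copies."""
    runs = []
    i = 0
    while i + 2 * copy_len <= len(s):
        n, j = 1, i
        while (j + 2 * copy_len <= len(s)
               and hamming(s[j:j + copy_len],
                           s[j + copy_len:j + 2 * copy_len]) <= e):
            n += 1
            j += copy_len
        if n >= 2:
            runs.append((i, n))
            i = j + copy_len            # skip past the whole run
        else:
            i += 1
    return runs

# 'ACGTA' drifts one letter per copy, yet consecutive copies stay within e=1.
print(evolutive_runs("ACGTAACGTTACGCTACGCT", 5, e=1))   # [(0, 4)]
```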
1000.
We present a refinement calculus for transforming object-oriented (OO) specifications (or contracts) of classes into executable Eiffel programs. The calculus includes the usual collection of algorithmic refinement rules for assignments, if-statements, and loops, but it also deals with some of the specific challenges of OO, namely rules for introducing feature calls and reference types (involving aliasing). The refinement process is compositional in the sense that a class specification is refined to code based only on the specifications (not the implementations) of the classes that the specification depends upon. We discuss how automated support for such a process can be developed based on existing tools. This work is done in the context of a larger project involving methods for the seamless design of OO software in the graphical design notation BON (akin to UML). The goal is to maintain model and source code integrity, i.e., the software developer can work on either the model or the code, where (ideally) changes in one view are reflected instantaneously and automatically in all views.