991.
Genetic Parallel Programming: design and implementation (cited 1 time: 0 self-citations, 1 by others)
This paper presents a novel Genetic Parallel Programming (GPP) paradigm for evolving parallel programs running on a Multi-Arithmetic-Logic-Unit (Multi-ALU) Processor (MAP). The MAP is a Multiple Instruction-streams, Multiple Data-streams (MIMD), general-purpose register machine that can be implemented on modern Very Large-Scale Integrated circuits (VLSI) in order to evaluate genetic programs at high speed. For human programmers, writing parallel programs is more difficult than writing sequential programs; experimental results show, however, that GPP evolves parallel programs with less computational effort than their sequential counterparts require. This leads to a new approach: evolve a feasible problem solution in parallel-program form, then serialize it into a sequential program if required. The effectiveness and efficiency of GPP are investigated using a suite of 14 well-studied benchmark problems. Experimental results show that GPP speeds up evolution substantially.
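The serialization step mentioned in the abstract can be sketched as a round-robin interleaving of the per-ALU instruction streams. This is a deliberately naive illustration, not the paper's algorithm: it assumes the streams are independent and ignores register dependencies between ALUs, and the function name and list-of-streams representation are assumptions.

```python
def serialize(streams):
    """Round-robin interleave parallel instruction streams into one
    sequential program. A real serializer must also respect data
    dependencies between streams; this sketch assumes independence."""
    sequential, step = [], 0
    # Keep going while at least one stream still has an instruction
    # at the current step index.
    while any(step < len(s) for s in streams):
        for s in streams:
            if step < len(s):
                sequential.append(s[step])
        step += 1
    return sequential
```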
992.
Lee B, Parr CS, Plaisant C, Bederson BB, Veksler VD, Gray WD, Kotfila C. IEEE Transactions on Visualization and Computer Graphics, 2006, 12(6): 1414-1426
Despite extensive research, it is still difficult to produce effective interactive layouts for large graphs. Dense layout and occlusion make food webs, ontologies, and social networks difficult to understand and interact with. We propose a new interactive visual analytics component called TreePlus that is based on a tree-style layout. TreePlus reveals the missing graph structure with visualization and interaction while maintaining good readability. To support exploration of the local structure of the graph and gathering of information from the extensive reading of labels, we use a guiding metaphor of "plant a seed and watch it grow." It allows users to start with a node and expand the graph as needed, which complements the classic overview techniques that can be effective at (but are often limited to) revealing clusters. We describe our design goals, describe the interface, and report on a controlled user study with 28 participants comparing TreePlus with a traditional graph interface for six tasks. In general, the advantage of TreePlus over the traditional interface increased as the density of the displayed data increased. Participants also reported higher levels of confidence in their answers with TreePlus, and most of them preferred TreePlus.
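The "plant a seed and watch it grow" idea can be approximated by extracting a breadth-first spanning tree of the graph rooted at the user's starting node: every reachable node appears once, attached to the parent through which it was first reached. This is a minimal sketch only; TreePlus itself adds interaction, previews, and layout logic not shown here, and the function and variable names are illustrative.

```python
from collections import deque

def grow_tree(adjacency, seed, max_depth):
    """Breadth-first expansion from a seed node, yielding a tree-style
    view (node -> parent) of a general graph up to max_depth levels."""
    parent = {seed: None}
    frontier = deque([(seed, 0)])
    while frontier:
        node, depth = frontier.popleft()
        if depth == max_depth:
            continue
        for neighbor in adjacency.get(node, []):
            if neighbor not in parent:  # already-placed nodes stay at their shallowest depth
                parent[neighbor] = node
                frontier.append((neighbor, depth + 1))
    return parent
```

Cycles in the graph (e.g. A-B-C-A) simply become nodes attached to whichever parent reached them first, which is what gives the graph a readable tree shape.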
993.
Uk Jung, Myong K. Jeong, Jye-Chyi Lu. IEEE Transactions on Systems, Man, and Cybernetics, Part B: Cybernetics, 2006, 36(5): 1128-1138
Due to the development of sensing and computer technology, measurements of many process variables are available in current manufacturing processes. It is very challenging, however, to process a large amount of information in a limited time in order to make decisions about the health of the processes and products. This paper develops a "preprocessing" procedure for multiple sets of complicated functional data in order to reduce the data size for supporting timely decision analyses. The data type studied has been used for fault detection, root-cause analysis, and quality improvement in such engineering applications as automobile and semiconductor manufacturing and nanomachining processes. The proposed vertical-energy-thresholding (VET) procedure balances the reconstruction error against data-reduction efficiency so that it is effective in capturing key patterns in the multiple data signals. The selected wavelet coefficients are treated as the "reduced-size" data in subsequent analyses for decision making. This enhances the ability of the existing statistical and machine-learning procedures to handle high-dimensional functional data. A few real-life examples demonstrate the effectiveness of our proposed procedure compared to several ad hoc techniques extended from single-curve-based data modeling and denoising procedures.
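The core idea of keeping only the highest-energy wavelet coefficients can be sketched as follows. This is a simplified stand-in for the paper's VET procedure, not a reproduction of it: it uses a single-level Haar transform (assuming an even-length signal) and a plain cumulative-energy cutoff, and the function name and `fraction` parameter are assumptions.

```python
import numpy as np

def vet_reduce(signal, fraction=0.99):
    """One-level Haar transform, then keep the smallest set of
    coefficients carrying `fraction` of the total energy; the rest
    are zeroed out, yielding the "reduced-size" data."""
    x = np.asarray(signal, dtype=float)       # assumes even length
    a = (x[0::2] + x[1::2]) / np.sqrt(2.0)    # approximation coefficients
    d = (x[0::2] - x[1::2]) / np.sqrt(2.0)    # detail coefficients
    coeffs = np.concatenate([a, d])
    order = np.argsort(coeffs ** 2)[::-1]     # largest-energy first
    cum = np.cumsum(coeffs[order] ** 2)
    k = int(np.searchsorted(cum, fraction * cum[-1])) + 1
    kept = np.zeros_like(coeffs)
    kept[order[:k]] = coeffs[order[:k]]
    return kept, k
```

For a piecewise-constant signal the detail coefficients vanish, so a small number of coefficients already carries essentially all of the energy.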
994.
Although the stringent requirements of some critical applications may require independent certification, the authors see software developer self-certification as a viable alternative in many other cases. They accept that using software certification laboratories (SCLs) may work well for certain software distribution models, but these cannot be applied to all types of software development, and the approach has several drawbacks. For example, an SCL may work well for larger software houses that ship mass-marketed software applications to the public, but it is less satisfactory for smaller developers who make reusable components or safety-critical software, or for developers who belong to the freeware community.
995.
Parameter-free geometric document layout analysis (cited 1 time: 0 self-citations, 1 by others)
Seong-Whan Lee, Dae-Seok Ryu. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2001, 23(11): 1240-1256
Automatic transformation of paper documents into electronic documents requires geometric document layout analysis at the first stage. However, variations in character font sizes, text line spacing, and document layout structures have long made it difficult to design a general-purpose document layout analysis algorithm, so the use of tuned parameters has been unavoidable in previous methods. The authors propose a parameter-free method for segmenting document images into maximal homogeneous regions and identifying them as text, images, tables, and ruling lines. A pyramidal quadtree structure is constructed for multiscale analysis, and a periodicity measure is suggested to find a periodical attribute of text regions for page segmentation. To obtain robust page segmentation results, a confirmation procedure using texture analysis is applied only to ambiguous regions. Based on the proposed periodicity measure, multiscale analysis, and confirmation procedure, the authors developed a robust method for geometric document layout analysis that is independent of character font sizes, text line spacing, and document layout structures. The proposed method was evaluated on the document database from the University of Washington and the MediaTeam Document Database; the results of these tests show that it provides more accurate results than previous methods.
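The pyramidal quadtree construction can be sketched as a recursive split of the page image into quadrants until each region is homogeneous. This is an illustrative toy only: the paper's structure also carries the periodicity measure and the texture-based confirmation step, neither of which is shown, and the homogeneity test (pixel-value standard deviation against a threshold) is an assumption.

```python
import numpy as np

def quadtree(img, thresh, depth=0, max_depth=4):
    """Recursively split a grayscale page image into four quadrants
    until each leaf region is homogeneous (low variance) or the
    maximum depth is reached. Leaves are summarized by their mean."""
    if depth == max_depth or img.std() <= thresh or min(img.shape) < 2:
        return img.mean()  # leaf: homogeneous region summary
    h, w = img.shape[0] // 2, img.shape[1] // 2
    return [quadtree(q, thresh, depth + 1, max_depth)
            for q in (img[:h, :w], img[:h, w:], img[h:, :w], img[h:, w:])]
```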
996.
We describe the parallelization of a first-order logic theorem prover that is based on the hyper-linking proof procedure (HLPP). Four parallel schemes – process level, clause level, literal level, and flow level – are developed for two types of sequential implementation of HLPP: list based and network based. The motivation for developing each parallel scheme is presented, and the architecture and implementation details of each scheme are described. Issues about parallel processing, such as serialization and synchronization, load balancing, and access conflicts, are examined. Speedups over sequential implementations are attained, and timing results for benchmark problems are provided.
997.
Efficient Graph-Based Algorithms for Discovering and Maintaining Association Rules in Large Databases (cited 4 times: 2 self-citations, 2 by others)
In this paper, we study the issues of mining and maintaining association rules in a large database of customer transactions. The problem of mining association rules can be mapped into the problem of finding large itemsets, which are sets of items bought together in a sufficient number of transactions. We revise a graph-based algorithm to further speed up the process of itemset generation. In addition, we extend our revised algorithm to maintain discovered association rules when incremental or decremental updates are made to the databases. Experimental results show the efficiency of our algorithms. The revised algorithm is a significant improvement over the original one for mining association rules. The algorithms for maintaining association rules are more efficient than re-running the mining algorithms on the whole updated database, and they outperform previously proposed algorithms that need multiple passes over the database.
Received 4 August 1999 / Revised 18 March 2000 / Accepted in revised form 18 October 2000
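The "large itemset" notion above can be illustrated with a brute-force support count. This is not the paper's graph-based algorithm; it is the kind of minimal baseline such algorithms improve on, and the function name, `min_support` parameter, and `max_size` cap are illustrative assumptions.

```python
from collections import Counter
from itertools import combinations

def frequent_itemsets(transactions, min_support, max_size=3):
    """Return every itemset of up to max_size items whose support --
    the fraction of transactions containing all its items -- is at
    least min_support. Itemsets are canonicalized as sorted tuples."""
    counts = Counter()
    for t in transactions:
        items = sorted(set(t))
        for k in range(1, max_size + 1):
            for itemset in combinations(items, k):
                counts[itemset] += 1
    n = len(transactions)
    return {s: c / n for s, c in counts.items() if c / n >= min_support}
```

Enumerating every sub-itemset of every transaction is exponential in transaction width, which is exactly why dedicated itemset-generation algorithms like the paper's are needed for large databases.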
998.
Recently, High Performance Computing (HPC) platforms have been employed to realize many computationally demanding applications in signal and image processing. These applications must meet real-time performance constraints, including latency as well as throughput. To meet these requirements, efficient parallel algorithms are needed, engineered to exploit the computational characteristics of such applications.
In this paper we present a methodology for mapping a class of adaptive signal processing applications onto HPC platforms such that throughput performance is optimized. We first define a new task model using the salient computational characteristics of a class of adaptive signal processing applications. Based on this task model, we propose a new execution model. Whereas the earlier linear pipelined execution model restricted the task mapping choices, the new model permits flexible task mapping, leading to improved throughput performance compared with the previous model. Using the new model, a three-step task mapping methodology is developed, consisting of (1) a data remapping step, (2) a coarse resource allocation step, and (3) a fine performance tuning step. The methodology is demonstrated by designing parallel algorithms for modern radar and sonar signal processing applications. These are implemented on the IBM SP2 and Cray T3E, state-of-the-art HPC platforms, to show the effectiveness of our approach. Experimental results show significant performance improvement over those obtained by previous approaches. Our code is written using C and the Message Passing Interface (MPI), and is thus portable across various HPC platforms.
Received April 8, 1998; revised February 2, 1999.
999.
R. Ottman, J.H. Lee, W.A. Hauser, S. Hong, D. Hesdorffer, N. Schupf, T.A. Pedley, M.L. Scheuer. Canadian Metallurgical Quarterly, 1993, 43(12): 2526-2530
Methods for standardized classification of epileptic seizures are important for both clinical practice and epidemiologic research. In this study, we developed a strategy for standardized classification using a semistructured telephone interview and operational diagnostic criteria. We interviewed 1,957 adults with epilepsy ascertained from voluntary organizations. To confirm and expand the seizure history, we also interviewed a first-degree relative for 67% of subjects and obtained medical records for 59%. Three lay reviewers used all available information to classify seizures. To assess reliability, each reviewer classified a sample of subjects assigned to the others. In addition, an expert physician classified a sample of subjects assigned to two of the reviewers. Agreement was "moderate-substantial" for generalized-onset seizures, both for the comparisons between pairs of lay reviewers and for the neurologist versus lay reviewers. Agreement was "substantial-almost perfect" for partial-onset seizures, both for pairs of lay reviewers and for the neurologist versus lay reviewers. These results suggest that seizures can be reliably classified by lay reviewers, using operational criteria applied to symptoms ascertained in a semistructured telephone interview.
1000.
Deformed high molecular weight polyethylene (HMWPE) rod, formed by die drawing at 115°C, was cleaved longitudinally at liquid nitrogen temperature, and the cleaved surface was etched by the permanganic etching technique. A series of etched surfaces of HMWPE sections of variable draw ratio (1–13) was analysed by scanning electron microscopy (SEM). The evolution of crystalline structure in HMWPE during die drawing was observed directly. In undrawn HMWPE, the spherulites were made up of sheaf-like lamellae and scattered within an amorphous phase. During die drawing, first, microscopically inhomogeneous deformation occurred and the spherulites aligned along the drawing direction; then, at a draw ratio of about 7, local melting occurred, the spherulites disintegrated and the sheaf-like lamellae oriented, followed by strain-induced recrystallization and growth of the lamellae; finally, at a draw ratio of about 12, plastic deformation of the lamellae occurred and microfibrils were formed by drawing of the lamellae.