Full-text access type
Paid full text | 5409 articles |
Free | 171 articles |
Free (domestic) | 157 articles |
Subject classification
Electrical engineering | 223 articles |
General | 106 articles |
Chemical industry | 93 articles |
Metalworking | 266 articles |
Machinery & instrumentation | 759 articles |
Building science | 78 articles |
Mining engineering | 22 articles |
Energy & power | 76 articles |
Light industry | 11 articles |
Water conservancy engineering | 5 articles |
Petroleum & natural gas | 14 articles |
Weapons industry | 24 articles |
Radio & electronics | 594 articles |
General industrial technology | 187 articles |
Metallurgical industry | 29 articles |
Atomic energy technology | 50 articles |
Automation technology | 3200 articles |
Publication year
2024 | 5 articles |
2023 | 26 articles |
2022 | 42 articles |
2021 | 57 articles |
2020 | 58 articles |
2019 | 41 articles |
2018 | 71 articles |
2017 | 120 articles |
2016 | 102 articles |
2015 | 157 articles |
2014 | 268 articles |
2013 | 223 articles |
2012 | 303 articles |
2011 | 409 articles |
2010 | 191 articles |
2009 | 295 articles |
2008 | 275 articles |
2007 | 322 articles |
2006 | 331 articles |
2005 | 325 articles |
2004 | 302 articles |
2003 | 220 articles |
2002 | 186 articles |
2001 | 147 articles |
2000 | 135 articles |
1999 | 147 articles |
1998 | 147 articles |
1997 | 123 articles |
1996 | 116 articles |
1995 | 105 articles |
1994 | 87 articles |
1993 | 63 articles |
1992 | 62 articles |
1991 | 43 articles |
1990 | 54 articles |
1989 | 32 articles |
1988 | 44 articles |
1987 | 12 articles |
1986 | 9 articles |
1985 | 21 articles |
1984 | 14 articles |
1983 | 10 articles |
1982 | 6 articles |
1981 | 4 articles |
1980 | 6 articles |
1979 | 4 articles |
1978 | 4 articles |
1976 | 4 articles |
1974 | 3 articles |
1973 | 4 articles |
Sort by: 5737 results in total, search time 0 ms
61.
Electron tomography (ET) combines electron microscopy and the principles of tomographic imaging in order to reconstruct the three-dimensional structure of complex biological specimens at molecular resolution. Weighted back-projection (WBP) has long been the method of choice since the reconstructions are very fast. It is well known that iterative methods produce better images, but at a very costly time penalty. In this work, it is shown that efficient parallel implementations of iterative methods, based primarily on data decomposition, can speed up such methods to an extent that they become viable alternatives to WBP. Precomputation of the coefficient matrix has also turned out to be important to substantially improve the performance regardless of the number of processors used. Matrix precomputation has made it possible to speed up the block-iterative component averaging (BICAV) algorithm, which has been studied before in the context of computerized tomography (CT) and ET, by a factor of more than 3.7. Component-averaged row projections (CARP) is a recently introduced block-parallel algorithm, which was shown to be a robust method for solving sparse systems arising from partial differential equations. It is shown that this algorithm is also suitable for single-axis ET, and is advantageous over BICAV both in terms of runtime and image quality. The experiments were carried out on several datasets of ET of various sizes, using the blob model for representing the reconstructed object.
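As a rough illustration of the row-projection updates underlying BICAV- and CARP-style methods, the following pure-Python sketch runs plain Kaczmarz sweeps on a toy linear system. This is an illustrative building block only, not the parallel reconstruction code from the paper, and the 2x2 system is made up.

```python
# Illustrative sketch (not the paper's code): one Kaczmarz-style sweep of
# row projections, the serial building block behind CARP/BICAV-type methods.
def kaczmarz_sweep(A, b, x, relax=1.0):
    """Project x onto each hyperplane a_i . x = b_i in turn."""
    for a_i, b_i in zip(A, b):
        norm2 = sum(a * a for a in a_i)
        if norm2 == 0.0:
            continue
        resid = (b_i - sum(a * xj for a, xj in zip(a_i, x))) / norm2
        x = [xj + relax * resid * a for xj, a in zip(x, a_i)]
    return x

# Toy consistent system: x + y = 3, x - y = 1  ->  solution (2, 1).
A = [[1.0, 1.0], [1.0, -1.0]]
b = [3.0, 1.0]
x = [0.0, 0.0]
for _ in range(50):
    x = kaczmarz_sweep(A, b, x)
```

Block-parallel variants such as CARP run sweeps like this on disjoint row blocks and then average the components of the partial results.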
62.
A high performance algorithm for static task scheduling in heterogeneous distributed computing systems (total citations: 2; self-citations: 0; cited by others: 2)
Effective task scheduling is essential for obtaining high performance in heterogeneous distributed computing systems (HeDCSs). However, finding an effective task schedule in HeDCSs requires the consideration of both the heterogeneity of processors and high interprocessor communication overhead, which results from non-trivial data movement between tasks scheduled on different processors. In this paper, we present a new high-performance scheduling algorithm, called the longest dynamic critical path (LDCP) algorithm, for HeDCSs with a bounded number of processors. The LDCP algorithm is a list-based scheduling algorithm that uses a new attribute to efficiently select tasks for scheduling in HeDCSs. The efficient selection of tasks enables the LDCP algorithm to generate high-quality task schedules in a heterogeneous computing environment. The performance of the LDCP algorithm is compared to two of the best existing scheduling algorithms for HeDCSs: the HEFT and DLS algorithms. The comparison study shows that the LDCP algorithm outperforms the HEFT and DLS algorithms in terms of schedule length and speedup. Moreover, the improvement in performance obtained by the LDCP algorithm over the HEFT and DLS algorithms increases as the inter-task communication cost increases. Therefore, the LDCP algorithm provides a practical solution for scheduling parallel applications with high communication costs in HeDCSs.
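The general flavor of list-based scheduling on heterogeneous processors can be sketched as follows. This is a generic earliest-finish-time heuristic, not the LDCP algorithm itself, and the task graph, per-processor costs, and communication delays are made-up toy values.

```python
# Hedged sketch of a generic list-scheduling heuristic for a heterogeneous
# system (illustrative only -- not the LDCP algorithm from the paper).
# Tasks are visited in topological order; each goes to the processor that
# yields the earliest finish time, charging a communication delay when a
# predecessor ran on a different processor.
def list_schedule(order, deps, cost, comm):
    n_procs = len(next(iter(cost.values())))
    proc_free = [0.0] * n_procs          # when each processor becomes idle
    finish, placed = {}, {}
    for t in order:
        best = None
        for p in range(n_procs):
            ready = max([proc_free[p]] + [
                finish[d] + (comm[(d, t)] if placed[d] != p else 0.0)
                for d in deps.get(t, [])])
            f = ready + cost[t][p]
            if best is None or f < best[0]:
                best = (f, p)
        finish[t], placed[t] = best[0], best[1]
        proc_free[best[1]] = best[0]
    return finish, placed

# Toy DAG: A -> B, A -> C; two processors with different speeds.
cost = {"A": [2.0, 3.0], "B": [4.0, 2.0], "C": [3.0, 3.0]}
deps = {"B": ["A"], "C": ["A"]}
comm = {("A", "B"): 1.0, ("A", "C"): 1.0}
finish, placed = list_schedule(["A", "B", "C"], deps, cost, comm)
makespan = max(finish.values())
```

LDCP, HEFT, and DLS differ chiefly in the priority attribute used to pick the next task; the placement loop above is the shared skeleton.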
63.
We develop a novel approach for computing the circle Hough transform entirely on graphics hardware (GPU). A primary role is assigned to vertex processors and the rasterizer, overshadowing the traditional foreground of pixel processors and enhancing parallel processing. Resources like the vertex cache or blending units are studied too, with our set of optimizations leading to extraordinary peak gain factors exceeding 358x over a typical CPU execution. Software optimizations, like the use of precomputed tables or gradient information and hardware improvements, like hyperthreading and multicores are explored on CPUs as well. Overall, the GPU exhibits better scalability and much greater parallel performance to become a solid alternative for computing the classical circle Hough transform versus those optimal methods run on emerging multicore architectures.
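For reference, the voting step of the classical circle Hough transform (the CPU baseline that the GPU version accelerates) can be sketched as below. The fixed radius, synthetic edge points, and 360-angle discretization are illustrative choices, not taken from the paper.

```python
import math
from collections import Counter

# CPU reference sketch of the circle Hough transform for a known radius:
# every edge point votes for all candidate centres lying at distance r
# from it; the true centre accumulates votes from every edge point.
def hough_circle(points, r, n_angles=360):
    acc = Counter()
    for x, y in points:
        for k in range(n_angles):
            t = 2 * math.pi * k / n_angles
            acc[(round(x - r * math.cos(t)), round(y - r * math.sin(t)))] += 1
    return acc

# Synthetic edge points on a circle of radius 3 centred at (5, 5).
pts = [(5 + 3 * math.cos(a), 5 + 3 * math.sin(a))
       for a in [2 * math.pi * i / 12 for i in range(12)]]
acc = hough_circle(pts, 3)
center, votes = acc.most_common(1)[0]
```

The GPU formulation in the paper maps this voting loop onto vertex processors and the rasterizer; the accumulator logic itself is unchanged.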
64.
We study the problem of one-dimensional partitioning of nonuniform workload arrays, with optimal load balancing for heterogeneous systems. We look at two cases: chain-on-chain partitioning, where the order of the processors is specified, and chain partitioning, where processor permutation is allowed. We present polynomial time algorithms to solve the chain-on-chain partitioning problem optimally, while we prove that the chain partitioning problem is NP-complete. Our empirical studies show that our proposed exact algorithms produce substantially better results than heuristics, while solution times remain comparable.
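A minimal sketch of the chain-on-chain case, assuming a bottleneck objective of load divided by processor speed: the dynamic program below illustrates the problem, and is not necessarily the (faster) exact algorithm proposed in the paper.

```python
# Illustrative dynamic program for chain-on-chain partitioning: split a
# workload array into contiguous blocks, one per processor in the given
# order, minimising the bottleneck time  load / speed  over heterogeneous
# processors. (A sketch of the problem, not the paper's exact algorithm.)
def chain_on_chain(w, speeds):
    n = len(w)
    prefix = [0.0]
    for x in w:
        prefix.append(prefix[-1] + x)
    INF = float("inf")
    # best[j] = optimal bottleneck for the first j items on the
    # processors consumed so far (empty blocks are allowed).
    best = [0.0] + [INF] * n
    for s in speeds:
        new = [INF] * (n + 1)
        for j in range(n + 1):
            for i in range(j + 1):
                if best[i] < INF:
                    t = max(best[i], (prefix[j] - prefix[i]) / s)
                    if t < new[j]:
                        new[j] = t
        best = new
    return best[n]

# Six tasks on two processors; the second processor is twice as fast.
bottleneck = chain_on_chain([4, 3, 2, 6, 1, 2], [1.0, 2.0])
```

Here the optimum assigns the first task (load 4) to the slow processor and the remaining load of 14 to the fast one, giving a bottleneck of 7.0.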
65.
W.M. Charles, E. van den Berg, H.X. Lin, A.W. Heemink, M. Verlaan 《Journal of Parallel and Distributed Computing》2008
This paper describes the parallel simulation of sediment dynamics in shallow water. By using a Lagrangian model, the problem is transformed to one in which a large number of independent particles must be tracked. This results in a technique that can be parallelised with high efficiency. We have developed a sediment transport model using three different sediment suspension methods. The first method uses a modified mean for the Poisson distribution function to determine the expected number of the suspended particles in each particular grid cell of the domain over all available processors. The second method determines the number of particles to suspend with the aid of the Poisson distribution function only in those grid cells which are assigned to that processor. The third method is based on the technique of using a synchronised pseudo-random-number generator to generate identical numbers of suspended particles in all valid grid cells for each processor. Parallel simulation experiments are performed in order to investigate the efficiency of these three methods. The parallel performance of the implementations is also analysed. We conclude that the second method is the best method on distributed computing systems (e.g., a Beowulf cluster), whereas the third maintains the best load distribution.
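The synchronized-PRNG idea behind the third method can be sketched as follows. Knuth's Poisson sampler and the per-cell means are stand-ins for whatever generator and parameters the authors actually used; the point is only that identically seeded streams make every process draw the same particle counts with no communication.

```python
import math
import random

# Sketch of a synchronised-PRNG suspension strategy (illustrative, not the
# paper's code): every process seeds an identical PRNG stream, so all of
# them draw the same Poisson particle count for every grid cell.
def poisson_knuth(rng, lam):
    """Knuth's simple Poisson sampler, adequate for small means."""
    L, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= rng.random()
        if p <= L:
            return k
        k += 1

def suspended_counts(seed, lam_per_cell):
    rng = random.Random(seed)          # same seed on every process
    return [poisson_knuth(rng, lam) for lam in lam_per_cell]

# Two "processes" with the same seed agree cell-by-cell.
cells = [0.5, 2.0, 1.3, 4.0]
a = suspended_counts(42, cells)
b = suspended_counts(42, cells)
```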
66.
Scalability is a key factor in the design of distributed systems and parallel algorithms and machines. However, conventional scalability metrics are designed for homogeneous parallel processing, and there is no suitable, commonly accepted definition of a scalability metric for heterogeneous systems. Isospeed scalability is a well-defined metric for homogeneous computing. This study extends the isospeed scalability metric to general heterogeneous computing systems. The proposed isospeed-efficiency model is suitable for both homogeneous and heterogeneous computing. Through theoretical analyses, we derive methodologies of scalability measurement and prediction for heterogeneous systems. Experimental results have verified the analytical results and confirmed that the proposed isospeed-efficiency scalability works well in both homogeneous and heterogeneous environments.
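One way to picture an isospeed-style computation is sketched below. This follows the classic isospeed form, with an aggregate computing capacity C replacing the processor count in the heterogeneous case; the paper's exact isospeed-efficiency formula may differ, so treat this as an assumption-laden illustration.

```python
# Hedged sketch of an isospeed-style scalability number (not necessarily
# the paper's exact formula). The achieved speed per unit of computing
# capacity is held fixed; the metric compares the extra work W' the scaled
# system needs against the capacity growth. C is aggregate capacity:
# processor count for homogeneous systems, summed marked speeds otherwise.
def isospeed_scalability(C, W, C_scaled, W_scaled):
    return (C_scaled * W) / (C * W_scaled)

# Ideal case: capacity doubles and the required work also doubles -> 1.0.
psi_ideal = isospeed_scalability(4.0, 100.0, 8.0, 200.0)

# Heterogeneous example: capacity grows 2x but work must grow 2.5x -> 0.8.
psi = isospeed_scalability(10.0, 100.0, 20.0, 250.0)
```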
67.
Aminollah Mahabadi, Hamid Sarbazi-Azad, Ebrahim Khodaie, Keivan Navi 《The Journal of supercomputing》2008,45(1):1-14
This paper proposes an efficient parallel algorithm for computing Lagrange interpolation on k-ary n-cube networks. This is done using the fact that a k-ary n-cube can be decomposed into n link-disjoint Hamiltonian cycles. Using these n link-disjoint cycles, we interpolate the Lagrange polynomial using the full bandwidth of the employed network. Communication in the main phase of the algorithm is based on an all-to-all broadcast algorithm on the n link-disjoint Hamiltonian cycles exploiting all network channels, thus resulting in high efficiency in using network resources. A performance evaluation of the proposed algorithm reveals an optimum speedup for a typical range of system parameters used in current state-of-the-art implementations.
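For context, plain sequential Lagrange interpolation looks as follows. The paper's contribution is distributing these sums over the link-disjoint Hamiltonian cycles of a k-ary n-cube, which this sketch does not attempt; the sample points are illustrative.

```python
# Sequential Lagrange interpolation: evaluate the unique polynomial
# through the given (x_i, y_i) pairs at point x. The per-term products
# are what the parallel algorithm spreads across network nodes.
def lagrange(points, x):
    total = 0.0
    for i, (xi, yi) in enumerate(points):
        term = yi
        for j, (xj, _) in enumerate(points):
            if j != i:
                term *= (x - xj) / (xi - xj)
        total += term
    return total

# Interpolating y = x**2 through three points recovers it (up to rounding).
pts = [(0.0, 0.0), (1.0, 1.0), (3.0, 9.0)]
val = lagrange(pts, 2.0)   # approximately 4.0
```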
68.
Sang Boem Lim, Hanku Lee, Bryan Carpenter, Geoffrey Fox 《The Journal of supercomputing》2008,43(2):165-182
This paper is concerned with enabling parallel, high-performance computation, in particular the development of scientific software in the network-aware programming language Java. Traditionally, this kind of computing was done in Fortran. Arguably, Fortran is becoming a marginalized language, with limited economic incentive for vendors to produce modern development environments, optimizing compilers for new hardware, or other kinds of associated software expected by today's programmers. Hence, Java looks like a very promising alternative for the future. The paper discusses in detail a particular environment called HPJava, an environment for parallel programming, especially data-parallel scientific programming, in Java. HPJava is based around a small set of language extensions designed to support parallel computation with distributed arrays, plus a set of communication libraries. A high-level communication API, Adlib, is developed as an application-level communication library suitable for HPJava; it supports collective operations on distributed arrays. We include Java Object as one of the Adlib communication data types, so we fully support communication of intrinsic Java types, including primitive types, and Java object types.
69.
This paper presents the development of the planar bipedal robot ERNIE, as well as numerical and experimental studies of the influence of parallel knee-joint compliance on the energetic efficiency of walking in ERNIE. ERNIE has five links (a torso, two femurs, and two tibias) and is configured to walk on a treadmill so that it can walk indefinitely in a confined space. Springs can be attached across the knee joints in parallel with the knee actuators. The hybrid zero dynamics framework serves as the basis for control of ERNIE's walking. In the investigation of the effects of compliance on the energetic efficiency of walking, four cases were studied: one without springs and three with springs of different stiffnesses and preloads. It was found that for low-speed walking, the addition of soft springs may be used to increase energetic efficiency, while stiffer springs decrease it. For high-speed walking, the addition of either soft or stiff springs increases the energetic efficiency of walking, with stiffer springs improving it more than softer springs.
Electronic Supplementary Material The online version of this article () contains supplementary material, which is available to authorized users.
R. A. Bockbrader
70.
Michael Creel 《Computational Economics》2008,32(4):343-352
Solving nonlinear macroeconomic models with rational expectations can be time-consuming. This paper shows how the parameterized expectations algorithm (PEA) can be parallelized to reduce the time needed to solve a simple model by more than 80%. The general idea of using parallelization applies naturally to other algorithms as well. This paper is illustrative of the speedup that can be obtained, and it provides computer code that may serve as an example for parallelization of other algorithms. For those who would like to use the parallelized PEA, the implementation does not confront end users with the details of parallelization. To solve a model, it is only necessary to provide ordinary serial code that simulates data from the model. All needed code is available, on a standalone basis, or pre-installed on ParallelKnoppix (Creel, J Appl Economet 22:215–223, 2007).
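The embarrassingly parallel structure that makes such speedups possible can be sketched as below. A thread pool and a trivial Gaussian "model" stand in for the cluster and the macroeconomic model of the paper: simulations are split into independently seeded chunks and the moments are merged afterwards.

```python
import random
from concurrent.futures import ThreadPoolExecutor

# Illustrative sketch (not the paper's code) of parallelising the Monte
# Carlo simulation step: each worker simulates an independent chunk with
# its own seed, and the partial sums are combined at the end.
def simulate_chunk(seed, n):
    rng = random.Random(seed)
    draws = [rng.gauss(0.0, 1.0) for _ in range(n)]
    return sum(draws), len(draws)

def parallel_mean(n_total, n_workers=4):
    per = n_total // n_workers
    with ThreadPoolExecutor(max_workers=n_workers) as pool:
        parts = list(pool.map(lambda s: simulate_chunk(s, per),
                              range(n_workers)))
    total = sum(p[0] for p in parts)
    count = sum(p[1] for p in parts)
    return total / count

mean = parallel_mean(40_000)   # close to the true mean of 0
```

The end-user contract described in the abstract is exactly the `simulate_chunk` role: supply ordinary serial simulation code, and the wrapper handles the distribution.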