Full-text access type
Paid full text | 54672 articles |
Free | 1800 articles |
Free (domestic) | 286 articles |
Subject classification
Electrical engineering | 742 articles |
General | 686 articles |
Chemical industry | 7776 articles |
Metalworking | 876 articles |
Machinery & instrumentation | 1380 articles |
Building science | 1281 articles |
Mining engineering | 415 articles |
Energy & power engineering | 1150 articles |
Light industry | 3130 articles |
Hydraulic engineering | 850 articles |
Petroleum & natural gas | 230 articles |
Armaments industry | 10 articles |
Radio electronics | 3028 articles |
General industrial technology | 5413 articles |
Metallurgy | 21692 articles |
Nuclear technology | 274 articles |
Automation technology | 7825 articles |
Publication year
2024 | 103 articles |
2023 | 419 articles |
2022 | 516 articles |
2021 | 847 articles |
2020 | 704 articles |
2019 | 885 articles |
2018 | 1430 articles |
2017 | 1574 articles |
2016 | 1930 articles |
2015 | 1307 articles |
2014 | 1314 articles |
2013 | 1770 articles |
2012 | 3015 articles |
2011 | 3364 articles |
2010 | 1269 articles |
2009 | 1290 articles |
2008 | 912 articles |
2007 | 869 articles |
2006 | 743 articles |
2005 | 3455 articles |
2004 | 2667 articles |
2003 | 2109 articles |
2002 | 909 articles |
2001 | 768 articles |
2000 | 307 articles |
1999 | 654 articles |
1998 | 6166 articles |
1997 | 3822 articles |
1996 | 2525 articles |
1995 | 1474 articles |
1994 | 1077 articles |
1993 | 1116 articles |
1992 | 256 articles |
1991 | 322 articles |
1990 | 321 articles |
1989 | 289 articles |
1988 | 299 articles |
1987 | 226 articles |
1986 | 207 articles |
1985 | 174 articles |
1984 | 88 articles |
1983 | 92 articles |
1982 | 136 articles |
1981 | 178 articles |
1980 | 193 articles |
1979 | 66 articles |
1978 | 101 articles |
1977 | 610 articles |
1976 | 1320 articles |
1975 | 98 articles |
Sort order: 10000 query results found; search took 15 ms
991.
An open-source software package with an easy-to-use graphical user interface (GUI) has been developed for processing, modeling and mapping of gravity and magnetic data. The program, called Potensoft, is a set of functions written in MATLAB. The most common application of Potensoft is spatial- and frequency-domain filtering of gravity and magnetic data. The GUI helps the user easily change all the required parameters. One of the major advantages of the program is that it displays the input and processed maps in a preview window, allowing the user to track the results during processing. Source codes can be modified depending on the user's goals. This paper discusses the main features of the program, and its capabilities are demonstrated by means of illustrative examples. The main objective is to introduce the package and encourage its use for academic, teaching and professional purposes.
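Potensoft itself is written in MATLAB and its API is not reproduced here; as a language-neutral sketch of the wavenumber-domain (frequency-domain) filtering such a package performs, the following NumPy fragment low-pass filters a gridded anomaly map. All names and parameters are illustrative assumptions.

```python
import numpy as np

def lowpass_filter(grid, spacing, cutoff_wavelength):
    """Keep only wavelengths longer than `cutoff_wavelength` by masking
    the 2-D Fourier spectrum of a gridded gravity/magnetic anomaly map."""
    ny, nx = grid.shape
    kx = np.fft.fftfreq(nx, d=spacing)      # wavenumbers, cycles per unit
    ky = np.fft.fftfreq(ny, d=spacing)
    KX, KY = np.meshgrid(kx, ky)
    k = np.hypot(KX, KY)                    # radial wavenumber
    mask = k <= 1.0 / cutoff_wavelength     # pass band
    return np.real(np.fft.ifft2(np.fft.fft2(grid) * mask))

# Synthetic example: a broad anomaly plus short-wavelength noise
x = np.linspace(0.0, 100.0, 128)
X, Y = np.meshgrid(x, x)
anomaly = np.exp(-((X - 50)**2 + (Y - 50)**2) / 400.0)
noisy = anomaly + 0.1 * np.sin(2 * np.pi * X / 2.0)   # 2-unit wavelength noise
smooth = lowpass_filter(noisy, spacing=x[1] - x[0], cutoff_wavelength=10.0)
```

A hard spectral cutoff like this rings near sharp features; practical filters usually taper the pass-band edge (e.g. with a cosine roll-off).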
992.
Sudip Misra, Manikonda Pavan Kumar, Mohammad S. Obaidat. Computer Communications, 2011, 34(12): 1484-1496
Efficient network coverage and connectivity are the requisites for most Wireless Sensor Network (WSN) deployments, particularly those concerned with area monitoring. Due to the resource constraints of the sensor nodes, redundancy of coverage area must be reduced for effective utilization of the available resources. If two nodes cover the same area in their active state and both are activated simultaneously, the result is redundancy in the network and wastage of precious sensor resources. In this paper, we address the problem of network coverage and connectivity and propose an efficient solution that maintains coverage while preserving the connectivity of the network. The proposed solution aims to cover the area of interest (AOI) while minimizing the count of active sensor nodes. The overlap region of two sensor nodes varies with the distance between them: if the distance between two sensor nodes is maximized, their overall coverage area is also maximized. Also, to preserve the connectivity of the network, each sensor node must be in the communication range of at least one other node. Simulation results for the proposed solution indicate up to 95% coverage of the area while consuming very little energy, 9.44 J per unit time, in a simulated area of 2500 m².
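The trade-off described above (space active nodes far apart to cut overlap, but keep each within communication range of another) can be sketched with a greedy activation rule. This is a hypothetical heuristic for illustration, not the authors' algorithm, and all thresholds below are made up.

```python
import math

def select_active_nodes(nodes, sense_r, comm_r):
    """Greedy sketch: activate a candidate only if its nearest active
    node is within communication range (preserving connectivity) yet
    farther than the sensing radius (limiting coverage overlap).
    Assumes comm_r > sense_r."""
    active = [nodes[0]]
    for p in nodes[1:]:
        d_min = min(math.dist(p, q) for q in active)
        if sense_r < d_min <= comm_r:
            active.append(p)
    return active

# 10 x 10 grid of candidate nodes on a 45 m x 45 m field
nodes = [(x, y) for x in range(0, 50, 5) for y in range(0, 50, 5)]
active = select_active_nodes(nodes, sense_r=7.0, comm_r=15.0)
```

By construction every activated node has another active node within communication range, while candidates closer than the sensing radius to an active node are rejected as redundant.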
993.
A. Bosque, V. Viñals, P. Ibáñez, J.M. Llabería. Microprocessors and Microsystems, 2011, 35(8): 695-707
Coherence protocols consume an important fraction of power to determine which coherence action to perform. Specifically, on CMPs with a shared cache and a directory-based coherence protocol implemented as a duplicate of the local cache tags, we have observed that a large fraction of directory lookups cause a miss, because the block looked up is not allocated in any local cache. To reduce the number of directory lookups, and therefore the power consumption, we propose to add a filter before the directory access. We introduce two filter implementations. In the first, filtering information is explicitly kept in the shared cache for every block. In the second, filtering information is decoupled from the shared cache organization, so the filter size does not depend on the shared cache size. We evaluate our filters in a CMP with 8 in-order processors with 4 threads each and a memory hierarchy with write-through local caches and a shared cache. We show that, for SPLASH2 benchmarks, the proposed filters reduce the number of directory lookups performed by 60%, while power consumption is reduced by ∼28%. For Specweb2005, the number of directory lookups performed is reduced by 68% (44%), while directory power consumption is reduced by 19% (9%) using the first (second) filter implementation.
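The idea of filtering directory lookups can be sketched with a counting bit-vector that conservatively tracks which blocks may reside in some local cache: a clear bit proves the block is uncached, so the power-hungry directory lookup is skipped. This is an illustrative structure, not either of the paper's two implementations.

```python
class DirectoryFilter:
    """Counting filter in front of a coherence directory: one bit (plus
    a counter) per hash bucket records whether any local cache may hold
    a block mapping there. No false negatives, occasional false
    positives from bucket collisions."""

    def __init__(self, n_bits=1024):
        self.bits = [0] * n_bits
        self.counts = [0] * n_bits   # counting variant so evictions clear bits
        self.skipped = 0             # directory lookups avoided

    def _bucket(self, addr):
        return addr % len(self.bits)

    def on_fill(self, addr):
        """A local cache allocated this block."""
        i = self._bucket(addr)
        self.counts[i] += 1
        self.bits[i] = 1

    def on_evict(self, addr):
        """The block left a local cache."""
        i = self._bucket(addr)
        self.counts[i] -= 1
        if self.counts[i] == 0:
            self.bits[i] = 0

    def lookup_needed(self, addr):
        """False means the directory lookup (and its power) is skipped."""
        if self.bits[self._bucket(addr)]:
            return True
        self.skipped += 1
        return False

filt = DirectoryFilter()
filt.on_fill(0x1A40)                  # some core caches block 0x1A40
hit = filt.lookup_needed(0x1A40)      # must consult the directory
miss = filt.lookup_needed(0x9999)     # provably uncached: lookup skipped
```

The counters are what let the filter shrink again on evictions; a plain Bloom filter would only ever accumulate set bits.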
994.
N. Beneš, L. Brim, B. Buhnova, I. Černá, J. Sochor, P. Vařeková. Science of Computer Programming, 2011, 76(10): 877-890
Software systems assembled from a large number of autonomous components become an interesting target for formal verification due to the issue of correct interplay in component interaction. State/event LTL (Chaki et al. (2004, 2005) [1] and [2]) incorporates both states and events to express important properties of component-based software systems. The main contribution of this paper is a partial order reduction technique for verification of state/event LTL properties. The core of the partial order reduction is a novel notion of stuttering equivalence which we call state/event stuttering equivalence. The positive attribute of the equivalence is that it can be resolved with existing methods for partial order reduction. State/event LTL properties are, in general, not preserved under state/event stuttering equivalence. To this end we define a new logic, called weak state/event LTL, which is invariant under the new equivalence. As evidence of the method's efficiency, we present some of the results obtained by employing the partial order reduction technique within our tool for verification of component-based systems modelled using the formalism of component-interaction automata (Brim et al. (2005) [3]).
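For finite traces, the classical (state-based) form of stuttering equivalence can be decided by collapsing maximal runs of repeated labels; a minimal sketch follows. The paper's state/event variant additionally tracks the events on transitions, which this toy version does not.

```python
def destutter(trace):
    """Collapse each maximal run of identical labels to a single label.
    Two finite traces are stuttering-equivalent iff they destutter to
    the same sequence (state-based sketch, not the state/event variant)."""
    out = []
    for label in trace:
        if not out or out[-1] != label:
            out.append(label)
    return out

def stutter_equivalent(t1, t2):
    """Finite-trace stuttering-equivalence check via normal forms."""
    return destutter(t1) == destutter(t2)
```

For example, `aabbbac` and `abaac` both destutter to `abac`, so they are stuttering-equivalent, which is exactly why plain LTL's next operator (able to count repeated states) is dropped in stutter-invariant logics.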
995.
Flash memory efficient LTL model checking
S. Edelkamp, D. Sulewski, J. Barnat, L. Brim, P. Šimeček. Science of Computer Programming, 2011, 76(2): 136-157
As the capacity and speed of flash memories in the form of solid-state disks grow, they are becoming a practical alternative to standard magnetic drives. Currently, most solid-state disks are based on NAND technology and are much faster than magnetic disks in random reads, while in random writes they are generally not. So far, large-scale LTL model checking algorithms have been designed to employ external memory optimized for magnetic disks. We propose algorithms optimized for flash memory access. In contrast to approaches relying on the delayed detection of duplicate states, in this work we design and exploit appropriate hash functions to re-invent immediate duplicate detection. For flash-memory-efficient on-the-fly LTL model checking, which aims at finding any counterexample to the specified LTL property, we study hash functions adapted to the two-level hierarchy of RAM and flash memory. For flash-memory-efficient off-line LTL model checking, which aims at generating a minimal counterexample and scans the entire state space at least once, we analyze the effect of outsourcing a memory-based perfect hash function from RAM to flash memory. Since the characteristics of flash memories are different from those of magnetic hard disks, the existing I/O complexity model is no longer sufficient. Therefore, we provide an extended model for the computation of the I/O complexity, adapted to flash memories, that better fits the observed behavior of our algorithms.
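The contrast between delayed and immediate duplicate detection can be sketched with a two-level visited set: membership tests hit RAM and a flash-resident table immediately, while flash is written only in batched sequential flushes, the pattern NAND handles well. This is a toy illustration of the idea, not the paper's hash-function design.

```python
class TwoLevelVisited:
    """Toy visited-state store over a RAM/flash hierarchy. Lookups are
    immediate (both levels are consulted on every test); writes to the
    'flash' level happen only as occasional batch flushes."""

    def __init__(self, ram_capacity=1024):
        self.ram = set()
        self.flash = set()           # stands in for a flash-resident table
        self.ram_capacity = ram_capacity

    def seen_or_add(self, state):
        """Return True if `state` was visited before, else record it."""
        if state in self.ram or state in self.flash:
            return True              # duplicate: prune this branch
        self.ram.add(state)
        if len(self.ram) > self.ram_capacity:
            self.flash |= self.ram   # one sequential batch write
            self.ram.clear()
        return False
```

In a real checker the flash level would be a carefully hashed on-disk table; the point here is only that duplicates are detected at lookup time rather than in a later merge pass.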
996.
997.
Auction processes are commonly employed in many environments. With rapid advances in Internet and computing technologies, electronic auctions have become very popular. People sell and buy a wide range of goods and services online. There is a growing need for the proper management of online auctions and for providing support to the parties involved. In this paper, we develop an interactive approach supporting both the buyer and the bidders in a multi-attribute, single-item, multi-round, reverse auction environment. We demonstrate the algorithm on a number of problems.
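A multi-attribute reverse auction needs some way to compare bids that differ on several dimensions; the simplest is an additive value function. The sketch below uses fixed, made-up weights and supplier names, whereas the paper's interactive approach elicits the buyer's preferences over rounds rather than fixing them up front.

```python
def score(bid, weights):
    """Additive value of a bid whose attributes are normalized to [0, 1]
    with higher = better (so 'price' here means price attractiveness)."""
    return sum(weights[attr] * value for attr, value in bid.items())

# Hypothetical bids and buyer weights (illustrative numbers only)
bids = {
    "supplier_a": {"price": 0.7, "quality": 0.9, "delivery": 0.6},
    "supplier_b": {"price": 0.9, "quality": 0.5, "delivery": 0.8},
}
weights = {"price": 0.5, "quality": 0.3, "delivery": 0.2}
winner = max(bids, key=lambda name: score(bids[name], weights))
```

In a multi-round setting each losing bidder would see feedback derived from such scores and revise its bid before the next round.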
998.
In this paper we present a new thermographic image database, suitable for the analysis of automatic focusing measures. This database contains the images of 10 scenes, each of which is represented once for each of 96 different focus positions. Using this database, we evaluate the usefulness of five focus measures with the goal of determining the optimal focus position. Experimental results reveal that the accurate automatic detection of optimal focus position can be achieved with a low computational burden. We also present an acquisition tool for obtaining thermal images. To the best of our knowledge, this is the first study on the automatic focusing of thermal images.
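A focus measure scores image sharpness, and the optimal focus position is the one maximizing that score across the sweep. The sketch below uses variance of a discrete Laplacian on a synthetic focus stack; it illustrates the selection principle, not the five specific measures the paper evaluates.

```python
import numpy as np

def focus_measure(img):
    """Variance of a discrete Laplacian: high for sharp images,
    low for defocused ones (one plausible sharpness measure)."""
    lap = (-4.0 * img
           + np.roll(img, 1, 0) + np.roll(img, -1, 0)
           + np.roll(img, 1, 1) + np.roll(img, -1, 1))
    return lap.var()

def blur(img, k):
    """Crude defocus model: k passes of 4-neighbour averaging."""
    for _ in range(k):
        img = 0.25 * (np.roll(img, 1, 0) + np.roll(img, -1, 0)
                      + np.roll(img, 1, 1) + np.roll(img, -1, 1))
    return img

# Synthetic focus sweep over 5 positions; position 2 is in focus
sharp = np.zeros((64, 64))
sharp[:, 32:] = 1.0                       # vertical step edge
stack = [blur(sharp, k) for k in (8, 4, 0, 4, 8)]
best = max(range(len(stack)), key=lambda i: focus_measure(stack[i]))
```

Because each measure is a cheap local operator, scanning all focus positions stays inexpensive, consistent with the low computational burden the abstract reports.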
999.
Małgorzata Żak-Szatkowska, Małgorzata Bogdan. Computational Statistics & Data Analysis, 2011, 55(11): 2908-2924
The classical model selection criteria, such as the Bayesian Information Criterion (BIC) or the Akaike Information Criterion (AIC), have a strong tendency to overestimate the number of regressors when the search is performed over a large number of potential explanatory variables. To handle the problem of overestimation, several modifications of the BIC have been proposed. These versions rely on supplementing the original BIC with some prior distributions on the class of possible models. Three such modifications are presented and compared in the context of sparse Generalized Linear Models (GLMs). The related choices of priors are discussed and the conditions for the asymptotic equivalence of these criteria are provided. The performance of the modified versions of the BIC is illustrated with an extensive simulation study and a real data analysis. Also, simplified versions of the modified BIC, based on least squares regression, are investigated.
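The overestimation problem and its fix can be seen directly in the penalty terms. The sketch below compares plain BIC with a commonly cited modified form that adds a 2k·log(p) term for searching over p candidate regressors; the exact form of the extra term is an assumption here, as the paper compares several variants rather than this one alone.

```python
import math

def bic(rss, n, k):
    """Gaussian least-squares BIC: n*log(RSS/n) + k*log(n)."""
    return n * math.log(rss / n) + k * math.log(n)

def mbic(rss, n, k, p):
    """Modified BIC sketch: BIC plus 2*k*log(p) to account for a
    search over p candidate regressors."""
    return bic(rss, n, k) + 2 * k * math.log(p)

# A second regressor shaves a little RSS: is it worth keeping?
n, p = 100, 1000
rss_1, rss_2 = 95.0, 90.0          # RSS with 1 vs. 2 regressors
bic_keeps_2 = bic(rss_2, n, 2) < bic(rss_1, n, 2 - 1)       # plain BIC accepts
mbic_keeps_2 = mbic(rss_2, n, 2, p) < mbic(rss_1, n, 1, p)  # mBIC rejects
```

With n = 100 and p = 1000, each extra regressor costs log(100) ≈ 4.6 under BIC but an additional 2·log(1000) ≈ 13.8 under the modified form, which is what suppresses spurious selections in wide searches.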
1000.
This work introduces a new algorithm for surface reconstruction in ℝ³ from spatially arranged one-dimensional cross sections embedded in ℝ³. This is generally the case with acoustic signals that pierce an object non-destructively. Continuous deformations (homotopies) that smoothly reconstruct information between any pair of successive cross sections are derived. The zero level set of the resulting homotopy field generates the desired surface. Four types of homotopies are suggested that are well suited to generate a smooth surface. We also provide a derivation of the necessary higher-order homotopies that can generate a C² surface. An algorithm to generate a surface from acoustic sonar signals is presented with results. Reconstruction accuracies of the homotopies are compared by means of simulations performed on basic geometric primitives.
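The simplest member of such a family is a linear homotopy H(x, t) = (1 - t)·f0(x) + t·f1(x) between implicit functions of two successive cross sections; sweeping t over [0, 1] and taking the zero level set traces the surface in between. A minimal NumPy illustration with circular cross sections follows; the paper derives smoother, higher-order homotopies for C² surfaces, which this linear blend does not reproduce.

```python
import numpy as np

def linear_homotopy(f0, f1, t):
    """H(., t) = (1 - t)*f0 + t*f1; its zero level set morphs the
    contour of one cross section into the next as t goes 0 -> 1."""
    return (1.0 - t) * f0 + t * f1

# Two successive cross sections as signed distance fields on a grid
x = np.linspace(-2.0, 2.0, 101)
X, Y = np.meshgrid(x, x)
f0 = np.hypot(X, Y) - 1.0            # radius-1.0 circle
f1 = np.hypot(X, Y) - 1.5            # radius-1.5 circle
mid = linear_homotopy(f0, f1, 0.5)   # zero set: radius-1.25 circle
```

Stacking the zero level sets for a sweep of t values yields the reconstructed surface between the two sections; for signed distance fields the linear blend interpolates the radius exactly.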