Full-text access type
Paid full text | 92025 articles |
Free | 1584 articles |
Free (domestic) | 427 articles |
Subject classification
Electrical engineering | 887 articles |
General | 2332 articles |
Chemical industry | 13649 articles |
Metalworking | 4928 articles |
Machinery and instruments | 3209 articles |
Building science | 2470 articles |
Mining engineering | 598 articles |
Energy and power | 1320 articles |
Light industry | 3960 articles |
Hydraulic engineering | 1333 articles |
Petroleum and natural gas | 369 articles |
Radio and electronics | 9953 articles |
General industrial technology | 18421 articles |
Metallurgical industry | 3591 articles |
Atomic energy technology | 330 articles |
Automation technology | 26686 articles |
Publication year
2023 | 153 articles |
2022 | 374 articles |
2021 | 418 articles |
2020 | 287 articles |
2019 | 249 articles |
2018 | 14665 articles |
2017 | 13560 articles |
2016 | 10237 articles |
2015 | 893 articles |
2014 | 653 articles |
2013 | 731 articles |
2012 | 3655 articles |
2011 | 9964 articles |
2010 | 8655 articles |
2009 | 5921 articles |
2008 | 7162 articles |
2007 | 8131 articles |
2006 | 431 articles |
2005 | 1451 articles |
2004 | 1336 articles |
2003 | 1358 articles |
2002 | 720 articles |
2001 | 212 articles |
2000 | 284 articles |
1999 | 172 articles |
1998 | 245 articles |
1997 | 184 articles |
1996 | 149 articles |
1995 | 91 articles |
1994 | 84 articles |
1993 | 98 articles |
1992 | 65 articles |
1991 | 73 articles |
1990 | 50 articles |
1989 | 37 articles |
1988 | 53 articles |
1987 | 40 articles |
1985 | 44 articles |
1984 | 37 articles |
1976 | 64 articles |
1968 | 50 articles |
1967 | 40 articles |
1966 | 55 articles |
1965 | 49 articles |
1959 | 36 articles |
1958 | 42 articles |
1957 | 38 articles |
1956 | 36 articles |
1955 | 67 articles |
1954 | 69 articles |
10000 results found (search time: 31 ms)
991.
Tomohiro Hashiguchi, Yutaka Takita, Kazuyuki Tajima, Toru Katagiri. Journal of Network and Systems Management, 2016, 24(3): 557-577
Shared protection/restoration is a promising solution for reducing protection resources and is supported at each layer of current multi-layer networks. Software-defined networking is expected to reduce both equipment and operational costs by orchestrating these shared protection functionalities. However, although protection resource sharing improves link utilization, it sometimes increases the required equipment. Meanwhile, traffic re-aggregation at each layer is an important technique for letting low-volume traffic use the underlying link capacity more efficiently, but re-aggregation also makes it difficult to share protection resources with traffic at lower layers. In this paper, we present a multi-layer network design strategy and method that reduce equipment cost through both traffic re-aggregation at each layer and protection resource sharing among multiple service traffic flows at different layers. The strategy first prioritizes traffic re-aggregation at each layer, and then maximally delegates shared protection to lower layers as long as this does not increase the required capacity at the lower layer. Evaluation results on example three-layer networks confirm that the proposed method effectively reduces equipment cost compared to the conventional design method; the reduction is achieved by leveraging shared protection functions at multiple layers.
992.
Velin Kounev, Martin Lévesque, David Tipper, Teresa Gomes. Journal of Network and Systems Management, 2016, 24(3): 629-652
In this paper, we discuss the communications reliability requirements posed by the smart power grid, with a focus on communications in support of wide area situational awareness. Implementation of wide area situational awareness relies on both transmission substation networks and wide area optical networks. We study the reliability of a sample communications network of the California Power Grid and find that its reliability falls short of proposed requirements. To overcome this issue, we consider the problem of designing the substation network and the wide area network to meet the reliability requirements while minimizing the network cost. For the wide area network design problem, we propose two alternate design approaches, namely: (1) following the power lines and (2) a mesh-based design interconnecting the nodes. For the first approach we develop two greedy iterative heuristics and a heuristic integer linear programming (H-ILP) model using minimum cut-sets for network reliability optimization. The greedy iterative algorithms outperform the H-ILP approach in terms of cost, but require a larger amount of computing resources. Both proposed models are in fact complementary and thus provide a framework to optimize the reliability of smart grid communications networks restricted to following the power lines. In the second approach, a greenfield mesh network method is proposed that starts with a minimum spanning tree, which is then augmented through a greedy heuristic into a mesh. Comparative numerical results show that the reliable mesh design has advantages in terms of the number of links and total link distance needed.
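The second (greenfield) approach — build a minimum spanning tree, then augment it greedily into a mesh — can be sketched as follows. This is a minimal illustration, not the paper's exact heuristic; the function name and the `extra_links` redundancy parameter are assumptions for the example:

```python
def mesh_from_mst(nodes, edges, extra_links=2):
    """Build a spanning tree (Kruskal), then greedily augment it with the
    cheapest remaining edges to add redundancy for reliability."""
    parent = {n: n for n in nodes}

    def find(n):  # union-find with path halving
        while parent[n] != n:
            parent[n] = parent[parent[n]]
            n = parent[n]
        return n

    tree = []
    for u, v, cost in sorted(edges, key=lambda e: e[2]):
        ru, rv = find(u), find(v)
        if ru != rv:                 # edge joins two components
            parent[ru] = rv
            tree.append((u, v, cost))

    # Greedy augmentation: add the cheapest non-tree edges as backup links.
    in_tree = {frozenset((u, v)) for u, v, _ in tree}
    candidates = [e for e in sorted(edges, key=lambda e: e[2])
                  if frozenset((e[0], e[1])) not in in_tree]
    return tree + candidates[:extra_links]
```

A real design would add links guided by a reliability metric (e.g. cut-set availability) rather than cost alone; the structure of the loop stays the same.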
993.
Mohammad Sadoghi, Kenneth A. Ross, Mustafa Canim, Bishwaranjan Bhattacharjee. The VLDB Journal, 2016, 25(5): 651-672
Multiversion databases store both current and historical data. Rows are typically annotated with timestamps representing the period when the row is/was valid. We develop novel techniques to reduce index maintenance in multiversion databases, so that indexes can be used effectively for analytical queries over current data without being a heavy burden on transaction throughput. To achieve this end, we re-design persistent index data structures in the storage hierarchy to employ an extra level of indirection. The indirection level is stored on solid-state disks, which support very fast random I/Os, so traversing the extra level of indirection incurs a relatively small overhead. The extra level of indirection dramatically reduces the number of magnetic disk I/Os needed for index updates and localizes maintenance to indexes on updated attributes. Additionally, we batch insertions within the indirection layer in order to reduce physical disk I/Os for indexing new records. In this work, we further exploit SSDs by introducing novel DeltaBlock techniques for storing the recent changes to data on SSDs. Using our DeltaBlock, we propose an efficient method to periodically flush the recently changed data from SSDs to HDDs such that, on the one hand, we keep track of every change (or delta) for every record, and, on the other hand, we avoid redundantly storing the unchanged portion of updated records. By reducing the index maintenance overhead on transactions, we enable operational data stores to create more indexes to support queries. We have developed a prototype of our indirection proposal by extending the widely used generalized search tree open-source project, which is also employed in PostgreSQL. Our working implementation demonstrates that we can significantly reduce index maintenance and/or query processing cost by a factor of 3. For the insertion of new records, our novel batching technique can save up to 90% of the insertion time. For updates, our prototype demonstrates that we can significantly reduce the database size, by up to 80%, even with a modest space allocated for DeltaBlocks on SSDs.
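The key indirection idea can be sketched in a few lines: indexes store stable logical record IDs, and a small SSD-resident table maps each logical ID to its current physical location, so a row update rewrites one table entry instead of every index pointing at the row. The class and method names here are illustrative, not from the paper:

```python
class IndirectionIndex:
    """Sketch of an index using an extra level of indirection: the index maps
    attribute values to logical IDs, and an SSD-resident table maps logical
    IDs to physical pages."""

    def __init__(self):
        self.indirection = {}   # logical ID -> physical page (the SSD layer)
        self.index = {}         # attribute value -> set of logical IDs

    def insert(self, lid, value, page):
        self.indirection[lid] = page
        self.index.setdefault(value, set()).add(lid)

    def move(self, lid, new_page):
        # A new row version lands on a new page: one indirection-table write,
        # zero index writes -- this is the maintenance saving.
        self.indirection[lid] = new_page

    def lookup(self, value):
        # Traverse the extra level: index -> logical IDs -> physical pages.
        return {self.indirection[lid] for lid in self.index.get(value, set())}
```

With many indexes on the same table, `move` still costs one write, whereas a direct-pointer design would have to update every index containing the row.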
994.
Stephan Baumann, Peter Boncz, Kai-Uwe Sattler. The VLDB Journal, 2016, 25(3): 291-316
Analytical workloads in data warehouses often include heavy joins where queries involve multiple fact tables in addition to the typical star patterns, dimensional grouping and selections. In this paper, we propose a new processing and storage framework called bitwise dimensional co-clustering (BDCC) that avoids replication and thus keeps updates fast, yet is able to accelerate all these foreign key joins, efficiently support grouping and push down most dimensional selections. The core idea of BDCC is to cluster each table on a mix of dimensions, each possibly derived from attributes imported over an incoming foreign key, in this way creating foreign-key-connected tables with partially shared clusterings. These are later used to accelerate any join between two tables that have some dimension in common, and additionally permit selections to be pushed down and propagated (reducing I/O) and aggregation and ordering operations to be accelerated. Besides the general framework, we describe an algorithm to derive such a physical co-clustering database automatically and describe query processing and query optimization techniques that can easily be fitted into existing relational engines. We present an experimental evaluation on the TPC-H benchmark in the Vectorwise system, showing that co-clustering can significantly enhance its already high performance and at the same time significantly reduce the memory consumption of the system.
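A cluster key derived from a mix of dimensions is commonly built by interleaving the bits of the dimension values (Z-ordering), so that rows close on every dimension receive nearby keys. The sketch below shows only this generic bit-interleaving idea, not BDCC's exact encoding:

```python
def interleave_bits(dims, bits=4):
    """Interleave the low `bits` bits of each dimension value into a single
    cluster key, most significant bits first. With dims = [a, b] the key's
    bit pattern is a3 b3 a2 b2 a1 b1 a0 b0."""
    key = 0
    for b in range(bits - 1, -1, -1):      # walk from high bit to low bit
        for d in dims:
            key = (key << 1) | ((d >> b) & 1)
    return key
```

Sorting a table on such keys co-locates rows that agree on the leading bits of shared dimensions, which is what lets joins and selections on those dimensions skip large ranges of the table.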
995.
Gheorghi Guzun, Guadalupe Canahuate. The VLDB Journal, 2016, 25(3): 339-354
Bit-vectors are widely used for indexing and summarizing data due to their efficient processing in modern computers. Sparse bit-vectors can be further compressed to reduce their space requirement. Special compression schemes based on run-length encoders have been designed to avoid explicit decompression and minimize the decoding overhead during query execution. Moreover, highly compressed bit-vectors can exhibit a faster query time than non-compressed ones. However, for hard-to-compress bit-vectors, compression does not speed up queries and can add considerable overhead. In these cases, bit-vectors are often stored verbatim (non-compressed). On the other hand, queries are answered by executing a cascade of bit-wise operations involving indexed bit-vectors and intermediate results. Often, even when the original bit-vectors are hard to compress, the intermediate results become sparse. It could be feasible to improve query performance by compressing these bit-vectors as the query is executed. In this scenario, it would be necessary to operate on verbatim and compressed bit-vectors together. In this paper, we propose a hybrid framework where compressed and verbatim bitmaps can coexist, and design algorithms to execute queries under this hybrid model. Our query optimizer is able to decide at run time when to compress or decompress a bit-vector. Our heuristics show that applications using higher-density bitmaps can benefit from this hybrid model, improving both their query time and memory utilization.
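The hybrid operation the paper requires — combining a verbatim bitmap with a run-length-compressed one without fully decompressing — can be illustrated with a toy word-aligned scheme. This is a simplified stand-in for real encoders such as WAH, not the paper's format:

```python
def rle_compress(words):
    """Run-length encode a list of 32-bit bitmap words: literal words are
    kept as-is; all-zero / all-one words become (fill_bit, run_length)."""
    out = []
    for w in words:
        fill = 0 if w == 0x00000000 else (1 if w == 0xFFFFFFFF else None)
        if fill is not None and out and isinstance(out[-1], tuple) and out[-1][0] == fill:
            out[-1] = (fill, out[-1][1] + 1)   # extend the current run
        elif fill is not None:
            out.append((fill, 1))              # start a new fill run
        else:
            out.append(w)                      # literal word
    return out

def and_verbatim_compressed(verbatim, compressed):
    """Bitwise AND of a verbatim bitmap (list of words) with a compressed
    one, skipping whole zero runs instead of decompressing them."""
    result, i = [], 0
    for item in compressed:
        if isinstance(item, tuple):
            fill, n = item
            if fill == 0:
                result.extend([0] * n)             # zero run: skip n words
            else:
                result.extend(verbatim[i:i + n])   # one run: copy n words
            i += n
        else:
            result.append(verbatim[i] & item)      # literal: one AND
            i += 1
    return result
```

The saving is visible in the zero-run branch: a long empty stretch of the compressed operand is handled in O(1) decisions rather than word by word.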
996.
Razen Harbi, Ibrahim Abdelaziz, Panos Kalnis, Nikos Mamoulis, Yasser Ebrahim, Majed Sahli. The VLDB Journal, 2016, 25(3): 355-380
State-of-the-art distributed RDF systems partition data across multiple compute nodes (workers). Some systems perform cheap hash partitioning, which may result in expensive query evaluation. Others try to minimize inter-node communication, which requires an expensive data preprocessing phase, leading to a high startup cost. Apriori knowledge of the query workload has also been used to create partitions, which, however, are static and do not adapt to workload changes. In this paper, we propose AdPart, a distributed RDF system that addresses the shortcomings of previous work. First, AdPart applies lightweight partitioning on the initial data, distributing triples by hashing on their subjects; this renders its startup overhead low. At the same time, the locality-aware query optimizer of AdPart takes full advantage of the partitioning to (1) support the fully parallel processing of join patterns on subjects and (2) minimize data communication for general queries by applying hash distribution of intermediate results instead of broadcasting, wherever possible. Second, AdPart monitors the data access patterns and dynamically redistributes and replicates the instances of the most frequent ones among workers. As a result, the communication cost for future queries is drastically reduced or even eliminated. To control replication, AdPart implements an eviction policy for the redistributed patterns. Our experiments with synthetic and real data verify that AdPart: (1) starts faster than all existing systems; (2) processes thousands of queries before other systems come online; and (3) gracefully adapts to the query load, being able to evaluate queries on billion-scale RDF data in under a second.
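Subject-hash partitioning, the lightweight initial placement described above, is simple to sketch: every triple goes to the worker owning the hash of its subject, so all triples of one subject (and hence any subject-star join pattern) are local to a single worker. A minimal illustration, with names chosen for the example:

```python
from collections import defaultdict

def partition_by_subject(triples, num_workers):
    """Assign each (subject, predicate, object) triple to the worker that
    owns hash(subject). Star joins on a subject need no communication."""
    workers = defaultdict(list)
    for s, p, o in triples:
        workers[hash(s) % num_workers].append((s, p, o))
    return workers
```

Note that Python's built-in `hash` for strings is randomized per process; a production system would use a stable hash so all nodes agree on the placement.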
997.
998.
Yaron Kanza, Hadas Yaari. The VLDB Journal, 2016, 25(4): 495-518
This paper studies the problem of how to conduct external sorting on flash drives while avoiding intermediate writes to the disk. The focus is on sort in portable electronic devices, where relations are only larger than the main memory by a small factor, and on sort as part of distributed processes where relations are frequently partially sorted. In such cases, sort algorithms that refrain from writing intermediate results to the disk have three advantages over algorithms that perform intermediate writes. First, on devices in which read operations are much faster than writes, such methods are efficient and frequently outperform Merge Sort. Second, they reduce flash cell degradation caused by writes. Third, they can be used in cases where there is not enough disk space for the intermediate results. Novel sort algorithms that avoid intermediate writes to the disk are presented. An experimental evaluation, on different flash storage devices, shows that in many cases the new algorithms can extend the lifespan of the devices by avoiding unnecessary writes to the disk, while maintaining efficiency, in comparison with Merge Sort.
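The read-many/write-once idea behind such algorithms can be sketched as a multi-pass selection sort: each pass rescans the input and emits, in order, the next chunk of smallest keys that fits in memory, so nothing is ever written back to the device. This is only an illustration of the principle (it assumes distinct keys), not the paper's algorithms:

```python
import heapq

def no_write_sort(scan, memory_budget):
    """Sort data larger than memory by rescanning the input each pass instead
    of writing intermediate runs. `scan()` re-reads the device and yields the
    unsorted items; assumes distinct keys for simplicity."""
    output, last = [], None
    while True:
        # Only keys above the last emitted one are still unsorted.
        remaining = (x for x in scan() if last is None or x > last)
        # heapq.nsmallest keeps at most `memory_budget` items in memory.
        batch = heapq.nsmallest(memory_budget, remaining)
        if not batch:
            return output
        output.extend(batch)        # each pass emits the next sorted chunk
        last = batch[-1]
```

With n items and memory for m of them, this performs about n/m full read passes and zero intermediate writes — a good trade when reads are much cheaper than writes, as on flash.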
999.
Ivan Caravela, Artur Arsenio, Nuno Borges. Journal of Network and Systems Management, 2016, 24(4): 974-1003
As telecommunication networks grow in size and complexity, monitoring systems need to scale up accordingly. Alarm data generated in a large network are often highly correlated. These correlations can be explored to simplify the process of network fault management by reducing the number of alarms presented to the network-monitoring operator, making it easier to react to network failures. But in some scenarios, it is highly desirable to prevent these failures by predicting the occurrence of alarms beforehand. This work investigates the use of data mining methods to generate knowledge from historical alarm data, and the use of such knowledge to train a machine learning system in order to predict the occurrence of the most relevant alarms in the network. The learning system was designed to be retrained periodically in order to keep an updated knowledge base.
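One common way to mine such knowledge is to extract sequential rules of the form "alarm A is followed by alarm B within w time units" from the historical log and keep the rules with high confidence. A minimal sketch of this mining step (the rule format and thresholds are illustrative, not from the paper; the log is assumed sorted by time):

```python
from collections import Counter

def mine_alarm_rules(history, window, min_conf=0.5):
    """Mine 'A is followed by B within `window`' rules from a time-sorted
    alarm log of (time, alarm) pairs. Returns {(A, B): confidence} for rules
    whose confidence >= min_conf."""
    antecedent = Counter()   # how often each alarm occurs
    pair = Counter()         # how often A is followed by B within the window
    for i, (t, a) in enumerate(history):
        antecedent[a] += 1
        seen = set()                      # count each consequent once per A
        for t2, b in history[i + 1:]:
            if t2 - t > window:
                break                     # log is time-sorted: stop early
            if b != a and b not in seen:
                pair[(a, b)] += 1
                seen.add(b)
    return {(a, b): c / antecedent[a]
            for (a, b), c in pair.items()
            if c / antecedent[a] >= min_conf}
```

At prediction time, seeing alarm A raises an early warning for every B with a high-confidence rule (A, B); retraining simply reruns the miner on the updated log.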
1000.
The idea of Bessel-Fourier moments (BFMs) for image analysis and rotation-invariant image recognition has been proposed recently. In this paper, we extend the previous work and propose a new method for rotation, scaling and translation (RST) invariant texture recognition using Bessel-Fourier moments. Compared with other moment-based methods, the radial polynomials of Bessel-Fourier moments have more zeros, and these zeros are more evenly distributed. This makes Bessel-Fourier moments more suitable for invariant texture recognition as a generalization of orthogonal complex moments. In the experiments, we constructed three test sets of 16, 24 and 54 texture images by translating, rotating and scaling them separately. The correct classification percentages (CCPs) are compared with those of orthogonal Fourier-Mellin moment and Zernike moment based methods under both noise-free and noisy conditions. Experimental results validate the conclusion of the theoretical derivation: BFMs perform better in recognition capability and noise robustness for RST texture recognition, under both noise-free and noisy conditions, than orthogonal Fourier-Mellin moment and Zernike moment based methods.
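The rotation invariance used by all such moment methods rests on a standard property of the angular factor exp(-i*m*theta): rotating the image multiplies each moment by a phase, leaving its magnitude unchanged. The sketch below demonstrates just this angular part on a function sampled at equally spaced angles (a rotation by a multiple of the sampling step is a circular shift); the Bessel radial polynomials specific to BFMs are omitted:

```python
import numpy as np

def angular_moment_magnitudes(samples):
    """Magnitudes of the angular Fourier coefficients of a function sampled
    at K equally spaced angles. A circular shift of the samples (a rotation
    by a multiple of 2*pi/K) changes only their phase, not their magnitude,
    so these values are rotation invariant."""
    return np.abs(np.fft.fft(np.asarray(samples, dtype=float)))
```

A full Bessel-Fourier moment would additionally weight each pixel by a Bessel radial polynomial before the angular transform; the invariance argument is identical.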