Sort order: 10,000 query results found, search time 31 ms
991.
Partitioning skew has been shown to be a major issue that can significantly prolong the execution time of MapReduce jobs. Most existing off-line heuristics for partitioning skew mitigation are inefficient: they must wait for all map tasks to complete. Some solutions tackle the problem on-line, but impose additional overhead by repartitioning the workload of overloaded tasks. In this paper, we present OPTIMA, an on-line partitioning skew mitigation technique for MapReduce. OPTIMA predicts the workload distribution of reduce tasks at run-time, leverages deviation detection to identify overloaded tasks, and pro-actively adjusts resource allocation for these tasks to reduce their execution time. We provide an upper bound on OPTIMA's time complexity while allowing it to operate fully on-line. Through experiments with both real and synthetic workloads on an 11-node Hadoop cluster, we observed that OPTIMA effectively mitigates partitioning skew and improves job completion time by up to 36.73 % in our experiments.
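A rough sketch of the run-time loop described above: per-partition reduce loads are extrapolated from partial map output, deviation detection flags outliers, and their resource allocation is scaled up. The function names, the linear extrapolation, and the k-sigma threshold are illustrative assumptions, not OPTIMA's actual prediction model.

```python
import statistics

def predict_reduce_loads(partial_counts, fraction_done):
    """Linearly extrapolate each reduce partition's final workload
    from the key counts observed in completed map output so far."""
    return [c / fraction_done for c in partial_counts]

def find_overloaded(loads, k=1.0):
    """Deviation detection: flag partitions whose predicted load
    exceeds the mean by more than k standard deviations."""
    mean = statistics.mean(loads)
    stdev = statistics.pstdev(loads)
    return [i for i, load in enumerate(loads) if load > mean + k * stdev]

def adjust_allocation(base_slots, loads, overloaded):
    """Pro-actively give overloaded reduce tasks extra resources,
    proportional to how far they exceed the average load."""
    mean = statistics.mean(loads)
    alloc = {i: base_slots for i in range(len(loads))}
    for i in overloaded:
        alloc[i] = max(base_slots, round(base_slots * loads[i] / mean))
    return alloc

# Example: partition 2 looks skewed after 40% of map tasks finished.
loads = predict_reduce_loads([120, 110, 480, 130], fraction_done=0.4)
hot = find_overloaded(loads)
print(hot, adjust_allocation(2, loads, hot))
```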
992.
This article presents a report on APNOMS2015, held August 19–21, 2015 in Busan, Korea. The theme of APNOMS2015 was "Managing a Very Connected World."
993.
Shared protection/restoration is a promising way to reduce protection resources and is supported at each layer of current multi-layer networks. Software-defined networking is expected to reduce both equipment cost and operational cost by orchestrating these shared protection functionalities. However, although protection resource sharing improves link utilization, it sometimes increases the required equipment. Meanwhile, traffic re-aggregation at each layer is an important technique for letting low-volume traffic use the underlying link capacity more efficiently, but re-aggregation also makes it harder to share protection resources with traffic at lower layers. In this paper, we present a multi-layer network design strategy and method that reduce equipment cost through both traffic re-aggregation at each layer and protection resource sharing among multiple service traffic flows at different layers. The strategy first prioritizes traffic re-aggregation at each layer, and then delegates as much shared protection as possible to lower layers, as long as this does not increase the required capacity at the lower layer. Evaluation results on example three-layer networks confirm that the proposed method effectively reduces equipment cost compared to the conventional design method; the reduction is achieved by leveraging shared protection functions at multiple layers.
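The delegation step of the strategy might look like the following toy sketch, assuming a drastically simplified capacity model in which shared protection needs only the maximum backup capacity across demands. The data structures and the acceptance test are illustrative, not the paper's formulation.

```python
def required_capacity(layer_demands, share_protection):
    """Toy capacity model: with sharing, backup capacity is the max
    over demands rather than their sum."""
    working = sum(d for d, _ in layer_demands)
    backups = [b for _, b in layer_demands]
    backup = max(backups, default=0) if share_protection else sum(backups)
    return working + backup

def delegate_protection(layers):
    """Greedily move each demand's protection to the next lower layer
    when that layer's capacity does not grow beyond the demand's own
    working capacity (which must be carried anyway)."""
    for upper, lower in zip(layers, layers[1:]):
        for demand in list(upper):
            before = required_capacity(lower, share_protection=True)
            lower.append(demand)
            if required_capacity(lower, share_protection=True) > before + demand[0]:
                lower.remove(demand)   # delegation would cost extra backup capacity
            else:
                upper.remove(demand)   # protect this demand at the lower layer

# Demands are (working, backup) capacity pairs, layers listed top to bottom.
layers = [[(10, 10), (5, 5)], [(40, 20)]]
delegate_protection(layers)
print(layers)  # both demands delegated: their backups fit under the shared max
```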
994.
In this paper, we discuss the communications reliability requirements posed by the smart power grid, with a focus on communications in support of wide-area situational awareness. Wide-area situational awareness relies on both transmission substation networks and wide-area optical networks. We study the reliability of a sample communications network of the California Power Grid and find that it falls short of proposed requirements. To overcome this, we consider the problem of designing the substation network and the wide-area network to meet the reliability requirements while minimizing network cost. For the wide-area network design problem, we propose two alternative design approaches: (1) following the power lines and (2) a mesh-based design interconnecting the nodes. For the first approach, we develop two greedy iterative heuristics and a heuristic integer linear programming (H-ILP) model based on minimum cut-sets for network reliability optimization. The greedy iterative algorithms outperform the H-ILP approach in terms of cost but require more computing resources; the two models are complementary and together provide a framework for optimizing the reliability of smart grid communications networks restricted to following the power lines. In the second approach, a greenfield mesh network is designed by starting from a minimum spanning tree and augmenting it into a mesh through a greedy heuristic. Comparative numerical results show that the reliable mesh design has advantages in terms of the number of links and total link distance needed.
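The second (greenfield) approach can be illustrated with a small sketch: build a minimum spanning tree, then greedily add the cheapest non-tree links until a distance budget is exhausted. The budget-based selection rule is an assumption for illustration; the paper's heuristic targets the reliability requirement directly.

```python
def kruskal_mst(nodes, edges):
    """edges: list of (dist, u, v). Returns the MST edge set."""
    parent = {n: n for n in nodes}
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x
    mst = []
    for dist, u, v in sorted(edges):
        ru, rv = find(u), find(v)
        if ru != rv:
            parent[ru] = rv
            mst.append((dist, u, v))
    return mst

def augment_to_mesh(nodes, edges, budget):
    """Add the cheapest non-tree links until the extra-distance budget
    is spent, giving nodes alternate (protection) paths."""
    mesh = kruskal_mst(nodes, edges)
    spent = 0.0
    for dist, u, v in sorted(edges):
        if (dist, u, v) in mesh:
            continue
        if spent + dist > budget:
            break
        mesh.append((dist, u, v))
        spent += dist
    return mesh

nodes = ["A", "B", "C", "D"]
edges = [(1.0, "A", "B"), (1.2, "B", "C"), (2.0, "C", "D"),
         (2.5, "A", "D"), (3.0, "B", "D")]
print(augment_to_mesh(nodes, edges, budget=3.0))
```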
995.
Multiversion databases store both current and historical data. Rows are typically annotated with timestamps representing the period during which the row is/was valid. We develop novel techniques to reduce index maintenance in multiversion databases, so that indexes can be used effectively for analytical queries over current data without placing a heavy burden on transaction throughput. To this end, we re-design persistent index data structures in the storage hierarchy to employ an extra level of indirection. The indirection level is stored on solid-state disks, which support very fast random I/Os, so traversing the extra level of indirection incurs a relatively small overhead. The extra level of indirection dramatically reduces the number of magnetic disk I/Os needed for index updates and localizes maintenance to indexes on updated attributes. Additionally, we batch insertions within the indirection layer to reduce physical disk I/Os for indexing new records. In this work, we further exploit SSDs by introducing novel DeltaBlock techniques for storing recent changes to data. Using DeltaBlocks, we propose an efficient method to periodically flush recently changed data from SSDs to HDDs such that, on the one hand, we keep track of every change (or delta) for every record and, on the other hand, we avoid redundantly storing the unchanged portion of updated records. By reducing the index maintenance overhead on transactions, we enable operational data stores to create more indexes to support queries. We have developed a prototype of our indirection proposal by extending the widely used generalized search tree open-source project, which is also employed in PostgreSQL. Our working implementation demonstrates that we can reduce index maintenance and/or query processing cost by a factor of 3. For the insertion of new records, our novel batching technique can save up to 90 % of the insertion time. For updates, our prototype demonstrates that we can reduce the database size by up to 80 %, even with modest space allocated for DeltaBlocks on SSDs.
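A toy model of the indirection idea, assuming a dictionary stands in for the SSD-resident mapping: secondary indexes would store stable logical IDs, so an update repoints one mapping entry instead of touching every index on the table. Class and method names are illustrative.

```python
class IndirectionLayer:
    def __init__(self):
        self.lid_to_rid = {}   # conceptually lives on the SSD
        self.heap = {}         # physical row versions on magnetic disk, keyed by RID
        self.next_rid = 0

    def insert(self, lid, row):
        rid = self.next_rid
        self.next_rid += 1
        self.heap[rid] = row
        self.lid_to_rid[lid] = rid          # one fast SSD write
        return rid

    def update(self, lid, row):
        """Multiversion update: write the new version to a fresh RID and
        repoint the logical ID; old versions stay addressable by RID."""
        return self.insert(lid, row)

    def fetch(self, lid):
        return self.heap[self.lid_to_rid[lid]]  # one extra SSD lookup

store = IndirectionLayer()
store.insert("cust-17", {"name": "Ada", "city": "London"})
store.update("cust-17", {"name": "Ada", "city": "Zurich"})
print(store.fetch("cust-17"))  # indexes on unchanged attributes were never touched
```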
996.
Analytical workloads in data warehouses often include heavy joins, where queries involve multiple fact tables in addition to the typical star patterns, dimensional grouping, and selections. In this paper we propose a new processing and storage framework called bitwise dimensional co-clustering (BDCC) that avoids replication, and thus keeps updates fast, yet accelerates all these foreign-key joins, efficiently supports grouping, and pushes down most dimensional selections. The core idea of BDCC is to cluster each table on a mix of dimensions, each possibly derived from attributes imported over an incoming foreign key, thereby creating foreign-key-connected tables with partially shared clusterings. These clusterings are later used to accelerate any join between two tables that share a dimension; they additionally allow selections to be pushed down and propagated (reducing I/O) and accelerate aggregation and ordering operations. Besides the general framework, we describe an algorithm that derives such a physically co-clustered database automatically, and we describe query processing and query optimization techniques that can easily be fitted into existing relational engines. We present an experimental evaluation on the TPC-H benchmark in the Vectorwise system, showing that co-clustering can significantly enhance its already high performance while significantly reducing the memory consumption of the system.
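One way to picture the "bitwise" part is a Z-order-style key that interleaves the bits of several dimension values; two foreign-key-connected tables clustered on a shared dimension then agree on those key bits and line up for merge-like joins. This is a sketch of the general idea under assumed bit widths, not BDCC's actual key derivation algorithm.

```python
def bdcc_key(dim_values, bits_per_dim):
    """Interleave the bits of each dimension value, most significant
    bit first, into a single integer clustering key."""
    key = 0
    max_bits = max(bits_per_dim)
    for b in range(max_bits - 1, -1, -1):
        for value, width in zip(dim_values, bits_per_dim):
            if b < width:                      # this dimension still has bits at level b
                key = (key << 1) | ((value >> b) & 1)
    return key

# e.g. orders clustered on (customer_region, order_date_bucket);
# lineitems can inherit customer_region over the orders foreign key,
# so both tables share the region bits of their clustering keys.
print(bdcc_key((5, 9), bits_per_dim=(3, 4)))
```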
997.
Bit-vectors are widely used for indexing and summarizing data because modern computers process them efficiently. Sparse bit-vectors can be further compressed to reduce their space requirements. Special compression schemes based on run-length encoders have been designed to avoid explicit decompression and to minimize decoding overhead during query execution; highly compressed bit-vectors can even exhibit faster query times than non-compressed ones. For hard-to-compress bit-vectors, however, compression does not speed up queries and can add considerable overhead, so such bit-vectors are often stored verbatim (non-compressed). Queries, in turn, are answered by executing a cascade of bit-wise operations involving indexed bit-vectors and intermediate results. Often, even when the original bit-vectors are hard to compress, the intermediate results become sparse, so it could be feasible to improve query performance by compressing these bit-vectors as the query is executed. In that scenario, verbatim and compressed bit-vectors must be operated on together. In this paper, we propose a hybrid framework in which compressed and verbatim bitmaps can coexist, and we design algorithms to execute queries under this hybrid model. Our query optimizer decides at run time when to compress or decompress a bit-vector. Our results show that applications using higher-density bitmaps can benefit from this hybrid model, improving both their query time and memory utilization.
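A much-simplified sketch of the two ingredients, in the spirit of word-aligned run-length encoders: a run-length compressor over 32-bit words and a density test that an optimizer could apply to an intermediate result at run time. The threshold and encoding details are illustrative assumptions, not the paper's scheme.

```python
WORD = 32

def compress_rle(words):
    """Collapse runs of all-zero / all-one words into ('fill', word,
    count) entries; keep mixed words verbatim as ('lit', word)."""
    out = []
    for w in words:
        if w in (0, (1 << WORD) - 1):
            if out and out[-1][0] == 'fill' and out[-1][1] == w:
                out[-1] = ('fill', w, out[-1][2] + 1)
            else:
                out.append(('fill', w, 1))
        else:
            out.append(('lit', w))
    return out

def should_compress(words, threshold=0.05):
    """Run-time decision rule: compress only sparse vectors, here when
    the fraction of set bits falls below the threshold."""
    ones = sum(bin(w).count('1') for w in words)
    return ones / (len(words) * WORD) < threshold

bitmap = [0, 0, 0, 0b1010, 0, 0, 0, 0]   # sparse intermediate result
if should_compress(bitmap):
    print(compress_rle(bitmap))   # [('fill', 0, 3), ('lit', 10), ('fill', 0, 4)]
else:
    print(bitmap)                 # dense: keep verbatim
```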
998.
State-of-the-art distributed RDF systems partition data across multiple compute nodes (workers). Some systems perform cheap hash partitioning, which may result in expensive query evaluation; others try to minimize inter-node communication, which requires an expensive data preprocessing phase and leads to high startup cost. A priori knowledge of the query workload has also been used to create partitions, which, however, are static and do not adapt to workload changes. In this paper, we propose AdPart, a distributed RDF system that addresses the shortcomings of previous work. First, AdPart applies lightweight partitioning to the initial data, distributing triples by hashing on their subjects; this keeps its startup overhead low. At the same time, AdPart's locality-aware query optimizer takes full advantage of the partitioning to (1) support fully parallel processing of join patterns on subjects and (2) minimize data communication for general queries by hash-distributing intermediate results instead of broadcasting them, wherever possible. Second, AdPart monitors data access patterns and dynamically redistributes and replicates the instances of the most frequent patterns among workers. As a result, the communication cost of future queries is drastically reduced or even eliminated. To control replication, AdPart implements an eviction policy for the redistributed patterns. Our experiments with synthetic and real data verify that AdPart (1) starts faster than all existing systems, (2) processes thousands of queries before other systems come online, and (3) gracefully adapts to the query load, evaluating queries on billion-scale RDF data in sub-second time.
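The initial lightweight partitioning is straightforward to sketch: each triple goes to the worker chosen by hashing its subject, so all join patterns on the same subject can be evaluated locally and in parallel. The worker count and in-memory layout below are illustrative, not AdPart's implementation.

```python
from collections import defaultdict

def partition_by_subject(triples, num_workers):
    """Assign each (subject, predicate, object) triple to the worker
    selected by hashing the subject."""
    workers = defaultdict(list)
    for s, p, o in triples:
        workers[hash(s) % num_workers].append((s, p, o))
    return workers

triples = [
    ("alice", "knows",   "bob"),
    ("alice", "livesIn", "paris"),   # same subject -> same worker
    ("bob",   "knows",   "carol"),
]
parts = partition_by_subject(triples, num_workers=4)
for worker, ts in sorted(parts.items()):
    print(worker, ts)
```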
999.
1000.
This paper studies how to perform external sorting on flash drives while avoiding intermediate writes to the disk. The focus is on sorting in portable electronic devices, where relations are only larger than main memory by a small factor, and on sorting as part of distributed processes, where relations are frequently partially sorted. In such cases, sort algorithms that refrain from writing intermediate results to the disk have three advantages over algorithms that perform intermediate writes. First, on devices where read operations are much faster than writes, such methods are efficient and frequently outperform Merge Sort. Second, they reduce flash-cell degradation caused by writes. Third, they can be used when there is not enough disk space for the intermediate results. Novel sort algorithms that avoid intermediate writes to the disk are presented. An experimental evaluation on different flash storage devices shows that, in many cases, the new algorithms can extend the lifespan of the devices by avoiding unnecessary writes, while maintaining efficiency comparable to Merge Sort.
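One simple member of this family (not necessarily the paper's algorithm) is a multi-pass selection sort: because the relation exceeds memory only by a small factor, a few cheap read passes can replace all intermediate writes. The sketch below assumes distinct keys for brevity.

```python
import heapq

def no_intermediate_write_sort(scan, mem_rows):
    """scan: a callable returning a fresh iterator over the relation.
    Yields all rows in sorted order using O(mem_rows) memory and zero
    intermediate disk writes, at the cost of one read pass per chunk."""
    total = sum(1 for _ in scan())       # one counting pass
    last = None
    emitted = 0
    while emitted < total:
        # keep only the mem_rows smallest keys strictly above the watermark
        chunk = heapq.nsmallest(
            mem_rows, (r for r in scan() if last is None or r > last))
        for r in chunk:
            yield r                      # goes straight to the final output
        emitted += len(chunk)
        last = chunk[-1]                 # raise the watermark

data = [9, 3, 7, 1, 8, 2, 6, 5, 4]       # stand-in for a flash-resident relation
print(list(no_intermediate_write_sort(lambda: iter(data), mem_rows=3)))
```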