971.
Sharing product information has become an integral part of today’s online social networking world. This study addresses the effects of customer engagement behavior in online social networks on other consumers in order to understand how online social connections influence decision making. We investigate how different variations of a brand-related Facebook post trigger different responses. In particular, we analyze the conditions under which negative posts can have positive consequences. The results of two online experiments set in a restaurant context suggest that the effects differ depending on whether the user knows the restaurant brand. For users who are familiar with the restaurant brand, negative information posted by distant acquaintances has a positive effect on the user's intention to visit. Both experiments also demonstrate that information posted by a close friend is perceived to be more diagnostic. For users not familiar with the restaurant brand, negative posts from strong ties induce the highest diagnosticity levels.
972.
The problem of determining whether several finite automata accept a word in common is closely related to the well-studied membership problem in transformation monoids. We raise the issue of limiting the number of final states in the automata intersection problem. For automata with two final states, we show the problem to be ⊕L-complete or NP-complete according to whether a nontrivial monoid other than a direct product of cyclic groups of order 2 is allowed in the automata. We further consider idempotent commutative automata and (mainly Abelian) group automata with one, two, or three final states over a singleton or larger alphabet, elucidating (under the usual hypotheses on complexity classes) the complexity of the intersection nonemptiness and related problems in each case.
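As a concrete illustration of the underlying decision problem (not of the paper's ⊕L/NP classification), the following minimal Python sketch checks intersection nonemptiness by a breadth-first search over the product automaton; the dictionary encoding of the DFAs is an assumption made for the example.

```python
from collections import deque

def intersection_nonempty(dfas):
    """Decide whether several DFAs accept a common word by exploring the
    product automaton breadth-first. Each DFA is a dict with keys
    'start', 'finals' (a set) and 'delta' {(state, symbol): next_state},
    all over the same alphabet. The product state space is the Cartesian
    product of the individual state sets, which is why the problem
    becomes hard as the number of automata grows."""
    alphabet = {sym for d in dfas for (_, sym) in d['delta']}
    start = tuple(d['start'] for d in dfas)
    seen, queue = {start}, deque([start])
    while queue:
        states = queue.popleft()
        if all(q in d['finals'] for q, d in zip(states, dfas)):
            return True                       # every automaton accepts here
        for sym in alphabet:
            nxt = tuple(d['delta'].get((q, sym)) for q, d in zip(states, dfas))
            if None not in nxt and nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return False

# Two DFAs over {a, b}: words ending in 'a' vs. words of even length.
d1 = {'start': 0, 'finals': {1},
      'delta': {(0, 'a'): 1, (0, 'b'): 0, (1, 'a'): 1, (1, 'b'): 0}}
d2 = {'start': 0, 'finals': {0},
      'delta': {(0, 'a'): 1, (0, 'b'): 1, (1, 'a'): 0, (1, 'b'): 0}}
print(intersection_nonempty([d1, d2]))        # True, e.g. the word 'aa'
```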
973.
Real-time services require reliable and fault-tolerant communication networks to support their stringent Quality of Service requirements. Multi-Topology Routing based IP Fast Re-route (MT-IPFRR) technologies provide seamless forwarding of IP packets during network failures by constructing virtual topologies (VTs) to re-route the disrupted traffic. Multiple Routing Configurations (MRC) is a widely studied MT-IPFRR technique. In this paper, we propose two heuristics, namely mMRC-1 and mMRC-2, to reduce the number of VTs required by MRC to provide full coverage for single link/node failures, and hence to decrease its operational complexity. Both heuristics are designed to construct VTs that are more robust against network partitioning by taking their topological characteristics into consideration. We perform extensive experiments on 3200 topologies with diverse structural properties using our automated topology generation and analysis tool. Numerical results show that the reduction in the number of required VTs reaches up to 31.84 % as networks contain more hub nodes whose degree is much higher than that of the rest of the network.
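As a rough illustration of the MRC-style idea of covering every link with some virtual topology, here is a hedged Python sketch (not the proposed mMRC-1/mMRC-2 heuristics): each VT is represented by the set of links it excludes from forwarding, a link is protected once it is excluded by a VT that remains connected, and the greedy loop tries to reuse existing VTs before opening a new one. Node-failure coverage and the robustness criteria from the paper are omitted.

```python
def build_virtual_topologies(nodes, links, max_vts=10):
    """Assign every link to a virtual topology (VT) in which it is
    excluded from forwarding, so traffic hit by that link's failure can
    be re-routed over the VT. A VT must stay connected once its excluded
    links are removed. Greedy, link-failure-only simplification."""

    def connected(excluded):
        adj = {n: set() for n in nodes}
        for u, v in links:
            if (u, v) not in excluded:
                adj[u].add(v)
                adj[v].add(u)
        seen, stack = {nodes[0]}, [nodes[0]]
        while stack:
            for w in adj[stack.pop()]:
                if w not in seen:
                    seen.add(w)
                    stack.append(w)
        return len(seen) == len(nodes)

    vts = []                                  # each VT = set of excluded links
    for link in links:
        for vt in vts:                        # try to reuse an existing VT
            if connected(vt | {link}):
                vt.add(link)
                break
        else:
            if len(vts) == max_vts or not connected({link}):
                raise ValueError("cannot protect link %s" % (link,))
            vts.append({link})                # open a new VT for this link
    return vts

# A 4-node full mesh needs 3 VTs with this naive greedy assignment.
nodes = ['a', 'b', 'c', 'd']
links = [(u, v) for i, u in enumerate(nodes) for v in nodes[i + 1:]]
print(build_virtual_topologies(nodes, links))
```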
974.
Partitioning skew has been shown to be a major issue that can significantly prolong the execution time of MapReduce jobs. Most existing off-line heuristics for partitioning skew mitigation are inefficient because they must wait for the completion of all map tasks. Some solutions can tackle the problem on-line, but they impose additional overhead by repartitioning the workload of overloaded tasks. In this paper, we present OPTIMA, an on-line partitioning skew mitigation technique for MapReduce. OPTIMA predicts the workload distribution of reduce tasks at run-time, leverages deviation detection to identify overloaded tasks, and pro-actively adjusts resource allocation for these tasks to reduce their execution time. We provide an upper bound on OPTIMA's time complexity while allowing OPTIMA to operate entirely on-line. In experiments using both real and synthetic workloads running on an 11-node Hadoop cluster, we observed that OPTIMA effectively mitigates partitioning skew and improves job completion time by up to 36.73 %.
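The run-time deviation-detection step can be pictured with a small sketch like the one below: it flags reduce tasks whose predicted input size deviates from the mean by more than k standard deviations and then assigns them extra resources in proportion to their excess load. The threshold and the proportional policy are illustrative assumptions, not OPTIMA's actual estimators.

```python
import statistics

def detect_overloaded_reducers(predicted_load, k=1.5):
    """Flag reduce tasks whose predicted workload deviates from the mean
    by more than k standard deviations (k is an assumed tuning knob).

    predicted_load: dict mapping reduce-task id -> predicted input size.
    Returns the set of task ids considered overloaded."""
    loads = list(predicted_load.values())
    mean = statistics.mean(loads)
    stdev = statistics.pstdev(loads)
    if stdev == 0:
        return set()                          # perfectly balanced partitions
    return {task for task, load in predicted_load.items()
            if (load - mean) / stdev > k}

def extra_resource_share(predicted_load, overloaded):
    """Give each overloaded task extra resources in proportion to how far
    its predicted load exceeds the average (illustrative policy only)."""
    mean = statistics.mean(predicted_load.values())
    return {t: predicted_load[t] / mean for t in overloaded}

# Example: reducer r3 receives a skewed key group at run time.
pred = {'r0': 100, 'r1': 110, 'r2': 95, 'r3': 400}
hot = detect_overloaded_reducers(pred)
print(hot, extra_resource_share(pred, hot))
```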
975.
This article presents a report on APNOMS2015, which was held on August 19–21, 2015, in Busan, Korea. The theme of APNOMS2015 was “Managing a Very Connected World.”
976.
Shared protection/restoration is a promising solution for reducing protection resources and is supported at each layer of current multi-layer networks. Software-defined networking is expected to reduce both equipment cost and operational cost by orchestrating these shared protection functionalities. However, although protection resource sharing improves link utilization, it sometimes increases the required equipment. Meanwhile, traffic re-aggregation at each layer is an important technique that lets low-volume traffic utilize the underlying link capacity more efficiently, but re-aggregation also makes it difficult to share protection resources with traffic at lower layers. In this paper, we present a multi-layer network design strategy and method that reduce equipment cost by combining traffic re-aggregation at each layer with protection resource sharing among multiple service traffic flows at different layers. The strategy first prioritizes traffic re-aggregation at each layer, and then maximally delegates shared protection to lower layers as long as doing so does not increase the required capacity at the lower layer. Evaluation results for example three-layer networks confirm that the proposed method can effectively reduce equipment cost compared to the conventional design method. The cost reduction is achieved by leveraging shared protection functions at multiple layers.
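The delegation rule described above ("delegate shared protection downward only while lower-layer capacity does not grow") can be stated compactly. The sketch below is a hedged paraphrase of that rule, not the paper's design method; lower_layer_capacity is an assumed callback that in a real design would come from a routing and grooming model.

```python
def delegate_shared_protection(demands, lower_layer_capacity):
    """Greedily delegate each demand's shared protection to the lower
    layer as long as the lower layer's required capacity does not grow;
    otherwise keep the protection at the demand's own layer."""
    delegated, kept = [], []
    for d in demands:
        if lower_layer_capacity(delegated + [d]) <= lower_layer_capacity(delegated):
            delegated.append(d)    # backup capacity is shared, no extra cost
        else:
            kept.append(d)         # delegation would add lower-layer capacity
    return delegated, kept

# Toy capacity model: 4 units of shared backup bandwidth are already
# provisioned at the lower layer, and backups share capacity, so the
# requirement equals the maximum protected bandwidth.
demands = [3, 5, 2, 7]
cap = lambda ds: max([4] + list(ds))
print(delegate_shared_protection(demands, cap))   # ([3, 2], [5, 7])
```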
977.
In this paper, we discuss the communications reliability requirements posed by the smart power grid, with a focus on communications in support of wide-area situational awareness. Implementation of wide-area situational awareness relies on both transmission substation networks and wide-area optical networks. We study the reliability of a sample communications network of the California Power Grid and find that its reliability falls short of the proposed requirements. To overcome this issue, we consider the problem of designing the substation network and the wide-area network to meet the reliability requirements while minimizing network cost. For the wide-area network design problem, we propose two alternative design approaches: (1) following the power lines and (2) a mesh-based design interconnecting the nodes. For the first approach we develop two greedy iterative heuristics and a heuristic integer linear programming (H-ILP) model using minimum cut-sets for network reliability optimization. The greedy iterative algorithms outperform the H-ILP approach in terms of cost, but require a larger amount of computing resources. The two models are complementary and thus provide a framework for optimizing the reliability of smart grid communications networks restricted to following the power lines. In the second approach, a greenfield mesh network method is proposed that starts with a minimum spanning tree and augments it through a greedy heuristic into a mesh. Comparative numerical results show that the reliable mesh design has advantages in terms of the number of links and the total link distance needed.
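For the second (greenfield) approach, the abstract describes starting from a minimum spanning tree and greedily augmenting it into a mesh. A minimal sketch of that idea follows; the degree-2 stopping rule is an assumption standing in for the paper's reliability criterion, and link costs play the role of distance.

```python
def kruskal_mst(nodes, links):
    """links: iterable of (cost, u, v) tuples. Returns the MST as a set
    of (cost, u, v) edges (Kruskal with a simple union-find)."""
    parent = {n: n for n in nodes}

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]     # path halving
            x = parent[x]
        return x

    mst = set()
    for cost, u, v in sorted(links):
        ru, rv = find(u), find(v)
        if ru != rv:
            parent[ru] = rv
            mst.add((cost, u, v))
    return mst

def mst_plus_greedy_mesh(nodes, links):
    """Start from the cheapest connected design (an MST) and greedily add
    the cheapest remaining links until no node has degree 1, so a single
    link failure can no longer isolate any node."""
    design = kruskal_mst(nodes, links)
    degree = {n: 0 for n in nodes}
    for _, u, v in design:
        degree[u] += 1
        degree[v] += 1
    for cost, u, v in sorted(set(links) - design):
        if degree[u] < 2 or degree[v] < 2:
            design.add((cost, u, v))
            degree[u] += 1
            degree[v] += 1
    return design

# Four substations: the MST is a path; augmentation closes it into a ring.
nodes = ['s1', 's2', 's3', 's4']
links = [(1, 's1', 's2'), (1, 's2', 's3'), (1, 's3', 's4'),
         (2, 's1', 's4'), (5, 's2', 's4')]
print(sorted(mst_plus_greedy_mesh(nodes, links)))
```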
978.
Multiversion databases store both current and historical data. Rows are typically annotated with timestamps representing the period during which the row is/was valid. We develop novel techniques to reduce index maintenance in multiversion databases, so that indexes can be used effectively for analytical queries over current data without becoming a heavy burden on transaction throughput. To this end, we re-design persistent index data structures in the storage hierarchy to employ an extra level of indirection. The indirection level is stored on solid-state disks, which support very fast random I/Os, so traversing the extra level of indirection incurs a relatively small overhead. The extra level of indirection dramatically reduces the number of magnetic disk I/Os needed for index updates and localizes maintenance to indexes on updated attributes. Additionally, we batch insertions within the indirection layer in order to reduce physical disk I/Os for indexing new records. In this work, we further exploit SSDs by introducing novel DeltaBlock techniques for storing recent changes to data on SSDs. Using DeltaBlocks, we propose an efficient method to periodically flush recently changed data from SSDs to HDDs such that, on the one hand, we keep track of every change (or delta) for every record and, on the other hand, we avoid redundantly storing the unchanged portion of updated records. By reducing the index maintenance overhead on transactions, we enable operational data stores to create more indexes to support queries. We have developed a prototype of our indirection proposal by extending the widely used generalized search tree open-source project, which is also employed in PostgreSQL. Our working implementation demonstrates that we can significantly reduce index maintenance and/or query processing cost by a factor of 3. For the insertion of new records, our batching technique can save up to 90 % of the insertion time. For updates, our prototype demonstrates that we can reduce the database size by up to 80 %, even with a modest amount of space allocated for DeltaBlocks on SSDs.
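To make the indirection idea concrete, here is a toy in-memory sketch (a simplification for illustration, not the GiST-based prototype): secondary indexes point to logical record ids, and only the LID-to-physical mapping, the structure the paper keeps on SSD, changes when a new row version is written, so indexes on unmodified attributes need no maintenance.

```python
class IndirectionLayer:
    """Toy sketch of index indirection: secondary indexes map key ->
    logical record id (LID); only the LID-to-physical mapping moves when
    a record gets a new version."""

    def __init__(self):
        self.lid_to_physical = {}   # LID -> heap slot (SSD-resident in the paper)
        self.heap = []              # append-only versioned rows (stand-in for HDD pages)
        self.index_by_name = {}     # secondary index: name -> set of LIDs

    def insert(self, lid, row):
        self.heap.append(row)
        self.lid_to_physical[lid] = len(self.heap) - 1
        self.index_by_name.setdefault(row['name'], set()).add(lid)

    def update(self, lid, **changes):
        # Write a new version of the row; only the indirection entry moves.
        old = self.heap[self.lid_to_physical[lid]]
        new = {**old, **changes}
        self.heap.append(new)
        self.lid_to_physical[lid] = len(self.heap) - 1
        if 'name' in changes:       # only indexes on updated attributes change
            self.index_by_name[old['name']].discard(lid)
            self.index_by_name.setdefault(new['name'], set()).add(lid)

    def lookup_by_name(self, name):
        return [self.heap[self.lid_to_physical[lid]]
                for lid in self.index_by_name.get(name, ())]

# Updating 'salary' leaves the name index untouched; only lid_to_physical changes.
db = IndirectionLayer()
db.insert(1, {'name': 'alice', 'salary': 100})
db.update(1, salary=120)
print(db.lookup_by_name('alice'))   # [{'name': 'alice', 'salary': 120}]
```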
979.
Analytical workloads in data warehouses often include heavy joins in which queries involve multiple fact tables in addition to the typical star patterns, dimensional grouping, and selections. In this paper we propose a new processing and storage framework called bitwise dimensional co-clustering (BDCC) that avoids replication and thus keeps updates fast, yet is able to accelerate all these foreign-key joins, efficiently supports grouping, and pushes down most dimensional selections. The core idea of BDCC is to cluster each table on a mix of dimensions, each possibly derived from attributes imported over an incoming foreign key, thereby creating foreign-key-connected tables with partially shared clusterings. These clusterings are later used to accelerate any join between two tables that have a dimension in common; they additionally allow selections to be pushed down and propagated (reducing I/O) and accelerate aggregation and ordering operations. Besides the general framework, we describe an algorithm to derive such a physically co-clustered database automatically, and we describe query processing and query optimization techniques that can easily be fitted into existing relational engines. We present an experimental evaluation on the TPC-H benchmark in the Vectorwise system, showing that co-clustering can significantly enhance its already high performance and at the same time significantly reduce the memory consumption of the system.
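The "mix of dimensions" can be pictured as a bit-interleaved clustering key in the spirit of a Z-order. The sketch below is an illustration under assumed bit scheduling, not necessarily the exact BDCC scheme: it interleaves the bucket numbers of two dimensions into a single key and orders the fact rows by it, so two tables sharing a dimension end up partially co-clustered on it.

```python
def bdcc_key(dim_buckets, bits_per_dim):
    """Interleave the bits of each dimension's bucket number, most
    significant bit first, into one integer clustering key."""
    key = 0
    for bit in range(max(bits_per_dim) - 1, -1, -1):
        for value, width in zip(dim_buckets, bits_per_dim):
            if bit < width:                     # dimension still has this bit
                key = (key << 1) | ((value >> bit) & 1)
    return key

# Cluster fact rows on a mix of a date bucket (3 bits) and a region
# bucket (2 bits); rows with nearby buckets in either dimension end up
# close together in the clustered order.
rows = [((5, 2), 'row-a'), ((1, 3), 'row-b'), ((5, 3), 'row-c')]
rows.sort(key=lambda r: bdcc_key(r[0], (3, 2)))
print([name for _, name in rows])               # ['row-b', 'row-a', 'row-c']
```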
980.
Bit-vectors are widely used for indexing and summarizing data due to their efficient processing in modern computers. Sparse bit-vectors can be further compressed to reduce their space requirements. Special compression schemes based on run-length encoders have been designed to avoid explicit decompression and to minimize the decoding overhead during query execution. Moreover, highly compressed bit-vectors can exhibit faster query times than non-compressed ones. However, for hard-to-compress bit-vectors, compression does not speed up queries and can add considerable overhead; in these cases, bit-vectors are often stored verbatim (non-compressed). On the other hand, queries are answered by executing a cascade of bit-wise operations involving indexed bit-vectors and intermediate results. Often, even when the original bit-vectors are hard to compress, the intermediate results become sparse, so it can be worthwhile to compress these bit-vectors as the query is executed. In this scenario, verbatim and compressed bit-vectors must be operated on together. In this paper, we propose a hybrid framework in which compressed and verbatim bitmaps coexist, and we design algorithms to execute queries under this hybrid model. Our query optimizer is able to decide at run time when to compress or decompress a bit-vector. Our heuristics show that applications using higher-density bitmaps can benefit from this hybrid model, improving both their query time and their memory utilization.
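A minimal sketch of the hybrid idea, using single bits instead of the word-aligned runs a real scheme such as WAH or EWAH would use: sparse intermediate results are run-length compressed on the fly, dense ones stay verbatim, and the two representations can still be combined in a bit-wise AND without fully expanding the compressed side. The 5% density threshold is an assumed tuning knob, not a value from the paper.

```python
def rle_compress(bits):
    """Run-length encode a bit list as [bit_value, run_length] pairs."""
    runs = []
    for b in bits:
        if runs and runs[-1][0] == b:
            runs[-1][1] += 1
        else:
            runs.append([b, 1])
    return runs

def and_verbatim_rle(verbatim, runs):
    """AND a verbatim bit list with an RLE bitmap of the same length:
    positions under a 0-run are emitted as zeros without touching the
    verbatim side; only 1-runs copy verbatim bits."""
    out, pos = [], 0
    for bit, length in runs:
        out.extend(verbatim[pos:pos + length] if bit else [0] * length)
        pos += length
    return out

def store(bits, density_threshold=0.05):
    """Run-time storage decision: compress sparse bit-vectors, keep dense
    ones verbatim."""
    if sum(bits) / len(bits) < density_threshold:
        return 'rle', rle_compress(bits)
    return 'verbatim', bits

# A dense index bitmap ANDed with a sparse intermediate result
# (density 1/32, so it is compressed on the fly).
dense = [1, 0, 1, 1, 0, 1, 1, 1] * 4
kind, payload = store([0] * 31 + [1])
print(kind, and_verbatim_rle(dense, payload))
```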