961.
Building on an analysis of hierarchical workflow modeling and of interoperability during workflow execution, we design an agent-based federation and sub-flow invocation framework that integrates hierarchical modeling, process interoperability, and mapping and transformation between heterogeneous data models, while also providing exception handling and disaster recovery. The framework resolves process collaboration at the business level, and the solution has been implemented on TiPLM2.9. Process instances interact through agents, and each process runs independently in its own workflow engine, which achieves loose coupling and isolation and improves system robustness.
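
To make the agent-mediated collaboration concrete, below is a minimal Python sketch of how two independently running workflow engines might invoke each other's sub-flows through agents; all class and method names are illustrative assumptions and are not part of TiPLM2.9.

# Minimal sketch (illustrative only): workflow engines collaborating through
# agents. Each process runs in its own engine; the agent maps heterogeneous
# data fields, forwards the sub-flow call, and handles failures, keeping the
# engines loosely coupled. None of these names come from TiPLM2.9.

class WorkflowEngine:
    """Stand-in for an independent workflow engine hosting one process."""
    def __init__(self, name):
        self.name = name

    def start_subflow(self, flow_id, payload):
        # A real engine would enqueue the sub-flow for execution here.
        print(f"[{self.name}] starting sub-flow {flow_id} with {payload}")
        return {"flow_id": flow_id, "status": "completed"}


class ProcessAgent:
    """Mediates between engines: maps data models and forwards invocations."""
    def __init__(self, local_engine, field_mapping):
        self.engine = local_engine
        self.mapping = field_mapping  # remote field name -> local field name

    def invoke(self, flow_id, remote_payload):
        try:
            local_payload = {self.mapping.get(k, k): v
                             for k, v in remote_payload.items()}
            return self.engine.start_subflow(flow_id, local_payload)
        except Exception as exc:
            # Exception-handling hook: report failure so the caller can
            # compensate or retry (disaster recovery would persist state here).
            return {"flow_id": flow_id, "status": "failed", "error": str(exc)}


# A parent process in engine A calls a sub-flow hosted by engine B via B's agent.
engine_b = WorkflowEngine("EngineB")
agent_b = ProcessAgent(engine_b, field_mapping={"partNo": "part_number"})
print(agent_b.invoke("approve_design", {"partNo": "X-100"}))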
962.
We describe a data deduplication system for backup storage of PC disk images, named in-RAM metadata utilizing deduplication (IR-MUD). In-RAM hash granularity adaptation and miniLZO-based data compression are first proposed to reduce the in-RAM metadata size and thereby the space overhead of the in-RAM metadata caches. Second, an in-RAM metadata write cache, as opposed to the traditional metadata read cache, is proposed to further reduce metadata-related disk I/O operations and improve deduplication throughput. During deduplication, the metadata write cache is managed under the LRU caching policy; for each manifest that hits in the metadata write cache, an expensive manifest reload from disk is avoided. After deduplication, all manifests in the metadata write cache are cleared and stored on disk. Our experimental results on a 1.5 TB real-world disk image dataset show that 1) IR-MUD achieved about 95% size reduction for the deduplication metadata with only a small time overhead; 2) when the metadata write cache was not used, with the same RAM budget for the metadata read cache, IR-MUD achieved a 400% higher RAM hit ratio and a 50% higher deduplication throughput than the classic Sparse Indexing deduplication system, which uses no metadata utilization approaches; and 3) when the metadata write cache was used and enough RAM was available, IR-MUD achieved a 500% higher RAM hit ratio than Sparse Indexing and a 70% higher deduplication throughput than IR-MUD with only a single metadata read cache. The in-RAM metadata harnessing and metadata write caching approaches of IR-MUD can be applied in most parallel deduplication systems to improve metadata caching efficiency.
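
As an illustration of the write-cache idea, the following is a minimal Python sketch (not the IR-MUD implementation) of an LRU-managed manifest write cache: a hit avoids reloading the manifest from disk, a miss loads and caches it while evicting the least recently used entry, and all cached manifests are flushed back to disk after deduplication. The data layout and function names are assumptions for the example.

# Minimal sketch (not IR-MUD itself): an LRU-managed metadata write cache
# for manifests. Disk I/O is simulated with plain dict lookups.

from collections import OrderedDict

class ManifestWriteCache:
    def __init__(self, capacity):
        self.capacity = capacity
        self.cache = OrderedDict()   # manifest_id -> manifest (chunk fingerprints)
        self.hits = 0
        self.misses = 0

    def get(self, manifest_id, load_from_disk):
        if manifest_id in self.cache:
            self.cache.move_to_end(manifest_id)   # mark as most recently used
            self.hits += 1
            return self.cache[manifest_id]
        self.misses += 1
        manifest = load_from_disk(manifest_id)    # the expensive disk reload
        self.cache[manifest_id] = manifest
        if len(self.cache) > self.capacity:
            self.cache.popitem(last=False)        # evict least recently used
        return manifest

    def flush(self, store_to_disk):
        # After deduplication, persist every cached manifest and clear the cache.
        for manifest_id, manifest in self.cache.items():
            store_to_disk(manifest_id, manifest)
        self.cache.clear()


# Toy usage.
disk = {"m1": ["fp1", "fp2"], "m2": ["fp3"]}
cache = ManifestWriteCache(capacity=1)
cache.get("m1", disk.__getitem__)
cache.get("m1", disk.__getitem__)    # hit: no disk reload
cache.get("m2", disk.__getitem__)    # miss: evicts m1
cache.flush(disk.__setitem__)
print(cache.hits, cache.misses)      # -> 1 2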
963.
With the increasing availability of real-time traffic information, dynamic spatial networks are now pervasive, and path planning over them has become an important problem. In this light, we propose and investigate a novel problem of dynamically monitoring shortest paths in spatial networks (the DSPM query). When a traveler heads toward a destination, the shortest path to that destination may change for two reasons: 1) the travel costs of some edges have been updated, or 2) the traveler has deviated from the pre-planned path. Our goal is to accelerate shortest path computation in dynamic spatial networks; this is useful in many mobile applications, such as route planning and recommendation, car navigation and tracking, and location-based services in general. The problem is challenging in two respects: 1) how to maintain and reuse existing computation results to accelerate subsequent computations, and 2) how to prune the search space effectively. To overcome these challenges, a filter-and-refinement paradigm is adopted: we maintain an expansion tree and define a pair of upper and lower bounds to prune the search space. A series of optimization techniques are developed to accelerate shortest path computation. The performance of the developed methods is studied in extensive experiments on real spatial data.
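
To illustrate the bounding idea, here is a minimal Python sketch (not the paper's expansion-tree algorithm) in which the cost of the previously planned path serves as an upper bound that prunes search states during re-planning; the graph format and function names are assumptions for the example.

# Minimal sketch of upper-bound pruning during re-planning: any search state
# whose accumulated cost already exceeds the old path's cost is discarded.

import heapq

def replan(graph, source, target, upper_bound=float("inf")):
    """graph: dict node -> list of (neighbor, cost). Returns (cost, path) or None."""
    dist = {source: 0.0}
    parent = {source: None}
    heap = [(0.0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")) or d > upper_bound:
            continue                      # stale entry, or pruned by the bound
        if u == target:
            path = []
            while u is not None:
                path.append(u)
                u = parent[u]
            return d, path[::-1]
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")) and nd <= upper_bound:
                dist[v] = nd
                parent[v] = u
                heapq.heappush(heap, (nd, v))
    return None


# Toy usage: the old path's cost (5.0) bounds the search after an edge update.
graph = {"a": [("b", 2.0), ("c", 4.0)], "b": [("d", 2.0)], "c": [("d", 1.0)], "d": []}
print(replan(graph, "a", "d", upper_bound=5.0))   # -> (4.0, ['a', 'b', 'd'])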
965.
The concept of green storage in cluster computing has recently attracted enormous interest among researchers. Consequently, several energy-efficient solutions, such as multi-speed disks and disk spin-down methods, have been proposed to conserve power in storage systems and improve disk access. Some researchers have assessed their proposed solutions via simulations, while others have used real-world experiments. Both methods have advantages and disadvantages. Simulations can assess the benefits of energy-efficient solutions more quickly, but various measurement errors can arise from procedural shortcomings; for instance, many power simulation tools fail to consider how heat increases the power overhead of disk operations. Some researchers claim that their modeling methods reduce the measurement error to 5% for a single-disk model. However, the demand for large-scale storage systems is growing rapidly, and traditional power measurement based on a single-disk model is unsuited to such systems because of their complex storage architectures and the unpredictability of numerous disks. Consequently, a number of studies have assessed the power savings of their solutions with real-machine experiments, but such experiments are time consuming. To address this problem, this study proposes an efficient simulation tool called Benchmark Analysis Software for Energy-efficient Solution (BASE), which can accurately estimate disks' power consumption in large-scale storage systems. We evaluate the performance of BASE on real-world traces from Academia Sinica (Taiwan) and Florida International University. BASE incorporates an analytical method for assessing the reliability of energy-efficient solutions. The analytical results demonstrate that the measurement error of BASE is 2.5% lower than that of real-world energy-estimation experiments. Moreover, the simulation results for assessing solution reliability are identical to those obtained through real-world experiments. Copyright © 2015 John Wiley & Sons, Ltd.
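
As a rough illustration of trace-driven power estimation in general, the following Python sketch accumulates per-disk energy as time spent in each power state multiplied by an assumed per-state wattage; the state model and power figures are placeholders and are not BASE's actual model.

# Minimal, generic sketch of trace-driven disk energy estimation
# (illustrative only): energy = sum over trace records of state power * time.

STATE_POWER_W = {"active": 10.0, "idle": 5.0, "standby": 1.0}  # assumed wattages

def estimate_energy(trace):
    """trace: list of (disk_id, state, seconds). Returns joules per disk."""
    energy = {}
    for disk_id, state, seconds in trace:
        energy[disk_id] = energy.get(disk_id, 0.0) + STATE_POWER_W[state] * seconds
    return energy

# Toy trace: disk 0 serves I/O then idles; disk 1 is spun down the whole time.
trace = [(0, "active", 30.0), (0, "idle", 70.0), (1, "standby", 100.0)]
print(estimate_energy(trace))   # -> {0: 650.0, 1: 100.0}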
967.
This paper considers the problem of computing general commutative and associative aggregate functions (such as Sum) over distributed inputs held by nodes in a distributed system, while tolerating failures. Specifically, there are N nodes in the system, and the topology among them is modeled as a general undirected graph. Whenever a node sends a message, the message is received by all of its neighbors in the graph. Each node has an input, and the goal is for a special root node (e.g., the base station in wireless sensor networks or the gateway node in wireless ad hoc networks) to learn a certain commutative and associative aggregate of all these inputs. All nodes in the system except the root node may experience crash failures, with the total number of edges incident to failed nodes upper bounded by f. The timing model is synchronous, where protocols proceed in rounds. Within such a context, we focus on the following question:
Under any given constraint on time complexity, what is the lowest communication complexity, in terms of the number of bits sent (i.e., locally broadcast) by each node, needed for computing general commutative and associative aggregate functions?
This work, for the first time, reduces the gap between the upper bound and the lower bound for the above question from polynomial to polylogarithmic. To achieve this reduction, we present significant improvements over both the existing upper bounds and the existing lower bounds on the problem.
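
For context, the following Python sketch shows the failure-free baseline rather than the paper's fault-tolerant protocol: a convergecast of a commutative and associative aggregate (Sum) up a spanning tree rooted at the special root node. The tree and input encodings are assumptions for the example.

# Minimal sketch of a convergecast Sum (failure-free baseline): each node
# combines its own input with its children's partial aggregates and forwards
# a single value to its parent; the root ends up with the global aggregate.

def convergecast_sum(tree_children, inputs, root):
    """tree_children: node -> list of child nodes; inputs: node -> value."""
    def aggregate(node):
        total = inputs[node]
        for child in tree_children.get(node, []):
            total += aggregate(child)   # child "sends" its partial sum upward
        return total
    return aggregate(root)

# Toy tree: root 0 with children 1 and 2; node 2 has child 3.
tree = {0: [1, 2], 2: [3]}
inputs = {0: 5, 1: 3, 2: 2, 3: 7}
print(convergecast_sum(tree, inputs, root=0))   # -> 17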