811.
View materialization is a well-known optimization technique of relational database systems. We present a similar, yet more powerful, optimization concept for object-oriented data models: function materialization. Exploiting the object-oriented paradigm (namely classification, object identity, and encapsulation) facilitates a rather easy incorporation of function materialization into (existing) object-oriented systems. Only those types (classes) whose instances are involved in some materialization are appropriately modified and recompiled, thus leaving the remainder of the object system invariant. Furthermore, the exploitation of encapsulation (information hiding) and object identity provides additional performance-tuning measures that drastically decrease the invalidation and rematerialization overhead incurred by updates in the object base. First, it allows us to cleanly separate the object instances that are irrelevant to the materialized functions from those that are involved in the materialization of some function result, and thus to penalize only the involved objects upon update. Second, the principle of information hiding facilitates fine-grained control over the invalidation of precomputed results. Based on specifications given by the data type implementor, the system can exploit operational semantics to better distinguish between update operations that invalidate a materialized result and those that require no rematerialization. The paper concludes with a quantitative analysis of function materialization based on two sample performance benchmarks obtained from our experimental object base system GOM.
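As a minimal sketch of the caching-plus-selective-invalidation idea described in this abstract, the Python class below stores a precomputed function result with the object and invalidates it only when an attribute the function depends on is updated. The Cuboid class, its volume function, and the dependency set are illustrative assumptions, not the interface of the GOM system.

```python
class Cuboid:
    """Illustrative object whose derived function volume() is materialized."""

    # attributes whose update invalidates the materialized result (assumption)
    _VOLUME_DEPENDS_ON = {"length", "width", "height"}

    def __init__(self, length, width, height, colour):
        self.__dict__["_cache"] = {}              # materialized results
        self.__dict__.update(length=length, width=width,
                             height=height, colour=colour)

    def __setattr__(self, name, value):
        self.__dict__[name] = value
        # fine-grained invalidation: only updates to relevant attributes
        # discard the precomputed result; updating colour leaves it intact
        if name in self._VOLUME_DEPENDS_ON:
            self.__dict__["_cache"].pop("volume", None)

    def volume(self):
        cache = self.__dict__["_cache"]
        if "volume" not in cache:                 # (re)materialize on demand
            cache["volume"] = self.length * self.width * self.height
        return cache["volume"]


box = Cuboid(2, 3, 4, "red")
print(box.volume())    # computed and stored: 24
box.colour = "blue"    # irrelevant update, stored result survives
print(box.volume())    # served from the materialized result: 24
box.height = 5         # relevant update invalidates the stored result
print(box.volume())    # rematerialized: 30
```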
812.
A general language for specifying resource allocation and time-tabling problems is presented. The language is based on an expert system paradigm that was developed previously by the authors and that enables the solution of resource allocation problems by using experts' knowledge and heuristics. The language enables the specification of a problem in terms of resources, activities, allocation rules, and constraints, and thus provides a convenient knowledge acquisition tool. The language syntax is powerful and allows the specification of rules and constraints that are very difficult to formulate with traditional approaches, and it also supports the specification of various control and backtracking strategies. We constructed a generalized inference engine that runs compiled resource allocation problem specification language (RAPS) programs and provides all necessary control structures. This engine acts as an expert system shell and is called expert system for resource allocation (ESRA). The performance of RAPS combined with ESRA is demonstrated by analyzing its solution of a typical resource allocation problem.
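The ingredients the abstract names (resources, activities, allocation rules expressed as constraints, and a backtracking engine) can be made concrete with a small illustrative sketch. This is not RAPS syntax or the ESRA engine; the lecture/room instance and all identifiers are invented for the example.

```python
def allocate(activities, resources, constraints, assignment=None):
    """Assign each activity a resource such that every constraint holds."""
    assignment = assignment or {}
    if len(assignment) == len(activities):
        return assignment
    activity = activities[len(assignment)]
    for resource in resources:
        trial = {**assignment, activity: resource}
        if all(c(trial) for c in constraints):
            result = allocate(activities, resources, constraints, trial)
            if result is not None:
                return result
    return None                                   # backtrack


# Illustrative problem: three lectures, two rooms, a capacity rule, and a
# rule that the two algebra lectures must not share a room.
rooms = ["small", "large"]
lectures = ["algebra1", "algebra2", "databases"]
capacity = {"small": 30, "large": 120}
enrolment = {"algebra1": 100, "algebra2": 25, "databases": 80}

constraints = [
    lambda a: all(capacity[r] >= enrolment[l] for l, r in a.items()),
    lambda a: not ("algebra1" in a and "algebra2" in a
                   and a["algebra1"] == a["algebra2"]),
]

print(allocate(lectures, rooms, constraints))
# {'algebra1': 'large', 'algebra2': 'small', 'databases': 'large'}
```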
813.
Efficient algorithms for processing large volumes of data are very important both for relational and new object-oriented database systems. Many query-processing operations can be implemented using sort- or hash-based algorithms, e.g., intersections, joins, and duplicate elimination. In the early relational database systems, only sort-based algorithms were employed. In the last decade, hash-based algorithms have gained acceptance and popularity, and are often considered generally superior to sort-based algorithms such as merge-join. In this article, we compare the concepts behind sort- and hash-based query-processing algorithms and conclude that (1) many dualities exist between the two types of algorithms, (2) their costs differ mostly by percentages rather than by factors, (3) several special cases exist that favor one or the other choice, and (4) there is a strong reason why both hash- and sort-based algorithms should be available in a query-processing system. Our conclusions are supported by experiments performed using the Volcano query execution engine.
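To make the sort/hash comparison concrete, the sketch below (not taken from the article) computes the same equi-join twice, once with a merge-join and once with an in-memory hash join; a real system would add external sorting and partitioning to disk.

```python
def merge_join(r, s, key=lambda t: t[0]):
    r, s = sorted(r, key=key), sorted(s, key=key)
    i = j = 0
    out = []
    while i < len(r) and j < len(s):
        if key(r[i]) < key(s[j]):
            i += 1
        elif key(r[i]) > key(s[j]):
            j += 1
        else:
            # emit the cross product of the two groups sharing this key value
            k, j0 = key(r[i]), j
            while i < len(r) and key(r[i]) == k:
                j = j0
                while j < len(s) and key(s[j]) == k:
                    out.append((r[i], s[j]))
                    j += 1
                i += 1
    return out

def hash_join(r, s, key=lambda t: t[0]):
    table = {}
    for t in r:                       # build phase on one input
        table.setdefault(key(t), []).append(t)
    return [(t1, t2)                  # probe phase on the other input
            for t2 in s for t1 in table.get(key(t2), [])]

R = [(1, "a"), (2, "b"), (2, "c")]
S = [(2, "x"), (3, "y")]
print(sorted(merge_join(R, S)) == sorted(hash_join(R, S)))   # True
```

Roughly, sorting both inputs and then merging plays a role analogous to the hash join's build and probe phases, which is one way to see the kind of duality the article discusses.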
814.
This paper considers the use of massively parallel architectures to execute a trace-driven simulation of a single cache set. A method is presented for the least-recently-used (LRU) policy which, regardless of the set size C, runs in time O(log N) using N processors on the EREW (exclusive read, exclusive write) parallel model. A simpler LRU simulation algorithm is given that runs in O(C log N) time using N/log N processors. We present timings of this algorithm's implementation on the MasPar MP-1, a machine with 16384 processors. A broad class of reference-based line replacement policies is considered, which includes LRU as well as the least-frequently-used (LFU) and random replacement policies. A simulation method is presented for any such policy that, on any trace of length N directed to a C-line set, runs in O(C log N) time with high probability using N processors on the EREW model. The algorithms are simple, have very little space overhead, and are well suited for SIMD implementation.
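For reference, the sketch below is the straightforward sequential baseline of the problem being parallelized: simulating a C-line LRU set over a trace. It uses the classical stack-distance observation, so one pass yields hit counts for every set size; the paper's contribution is performing an equivalent computation in O(log N) parallel time, which this sketch does not attempt.

```python
def lru_stack_distances(trace):
    """Return, for each reference, its LRU stack distance (inf on first use)."""
    stack, dists = [], []
    for line in trace:
        if line in stack:
            d = stack.index(line) + 1      # depth in the LRU stack
            stack.remove(line)
        else:
            d = float("inf")               # cold miss
        stack.insert(0, line)              # most recently used goes on top
        dists.append(d)
    return dists

def lru_hits(trace, capacity):
    """Hits of a C-line LRU set: references with stack distance <= C."""
    return sum(1 for d in lru_stack_distances(trace) if d <= capacity)

trace = ["a", "b", "a", "c", "b", "a", "d", "a"]
for c in (1, 2, 3):
    print(c, lru_hits(trace, c))   # 1 -> 0 hits, 2 -> 2 hits, 3 -> 4 hits
```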
815.
Multicast communication, in which the same message is delivered from a source node to an arbitrary number of destination nodes, is being increasingly demanded in parallel computing. System-supported multicast services can potentially offer improved performance, increased functionality, and simplified programming, and may in turn be used to support various higher-level operations for data movement and global process control. This paper presents efficient algorithms to implement multicast communication in wormhole-routed direct networks, in the absence of hardware multicast support, by exploiting the properties of the switching technology. Minimum-time multicast algorithms are presented for n-dimensional meshes and hypercubes that use deterministic, dimension-ordered routing of unicast messages. Both algorithms can deliver a multicast message to m-1 destinations in ⌈log2 m⌉ message-passing steps, while avoiding contention among the constituent unicast messages. Performance results of implementations on a 64-node nCUBE-2 hypercube and a 168-node Symult 2010 2-D mesh are given.
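The recursive-doubling idea behind such minimum-time software multicasts can be sketched as follows: in every step, each node that already holds the message forwards it to one uninformed destination, so m-1 destinations are reached in ⌈log2 m⌉ unicast steps. Contention avoidance on an actual mesh or hypercube depends on dimension-ordered routing and careful destination ordering, which this illustrative sketch does not model.

```python
import math

def multicast_schedule(source, destinations):
    """Return a list of steps; each step is a list of (sender, receiver) unicasts."""
    holders = [source]
    pending = list(destinations)
    steps = []
    while pending:
        step = []
        for sender in list(holders):       # every current holder sends once
            if not pending:
                break
            receiver = pending.pop(0)
            step.append((sender, receiver))
            holders.append(receiver)
        steps.append(step)
    return steps

dests = [1, 2, 3, 4, 5, 6]                 # m - 1 = 6, so m = 7
schedule = multicast_schedule(0, dests)
for i, step in enumerate(schedule, 1):
    print(f"step {i}: {step}")
print(len(schedule) == math.ceil(math.log2(len(dests) + 1)))   # True
```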
816.
This paper describes several loop transformation techniques for extracting parallelism from nested loop structures. Nested loops can then be scheduled to run in parallel so that execution time is minimized. One technique is called selective cycle shrinking, and the other is called true dependence cycle shrinking. It is shown how selective shrinking is related to linear scheduling of nested loops and how true dependence shrinking is related to conflict-free mappings of higher dimensional algorithms into lower dimensional processor arrays. Methods are proposed in this paper to find the selective and true dependence shrinkings with minimum total execution time by applying the techniques of finding optimal linear schedules and optimal and conflict-free mappings proposed by W. Shang and A.B. Fortes.
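A toy illustration of the shrinking idea (not the paper's formulation): a loop whose dependence cycle has distance 3 can execute each block of 3 consecutive iterations in parallel, because no dependence falls inside a block. The sketch only simulates the parallel block, but it verifies that the transformed loop computes the same result as the serial one.

```python
N, DIST = 20, 3                                   # DIST = dependence distance

def serial(a, b):
    a = a[:]
    for i in range(DIST, N):                      # a[i] depends on a[i - DIST]
        a[i] = a[i - DIST] + b[i]
    return a

def shrunk(a, b):
    a = a[:]
    for block in range(DIST, N, DIST):            # sequential over blocks
        group = range(block, min(block + DIST, N))
        # iterations inside a block are independent and could run in parallel
        updates = {i: a[i - DIST] + b[i] for i in group}
        for i, v in updates.items():              # commit the block's results
            a[i] = v
    return a

a0 = list(range(N))
b0 = [1] * N
print(serial(a0, b0) == shrunk(a0, b0))           # True
```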
817.
We develop a characterization for m-fault-tolerant extensions, and for optimal m-fault-tolerant extensions, of a complete multipartite graph. Our formulation shows that this problem is equivalent to an interesting combinatorial problem on the partitioning of integers. This characterization leads to a new procedure for constructing an optimal m-fault-tolerant extension of any complete multipartite graph, for any m⩾0. The proposed procedure is mainly useful when the size of the graph is relatively small, because the search time required is exponential. This exponential search, however, is not always necessary. We prove several necessary conditions that help us, in several cases, to identify some optimal m-fault-tolerant extensions without performing any search.
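The definition being characterized can be checked by brute force on tiny graphs, which makes it concrete even though it is far too slow for real use: H is an m-fault-tolerant extension of G if H has |V(G)| + m nodes and, after deleting any m of them, the remainder still contains G as a subgraph. The example graphs below (G is the complete bipartite graph K2,2, i.e. a 4-cycle) are invented for illustration, and the positive example is an extension but not an optimal one, which is usually taken to mean one with as few edges as possible.

```python
from itertools import combinations, permutations

def contains_subgraph(h_nodes, h_edges, g_nodes, g_edges):
    """Does (h_nodes, h_edges) contain (g_nodes, g_edges) as a subgraph?"""
    h_edges = {frozenset(e) for e in h_edges}
    for image in permutations(h_nodes, len(g_nodes)):
        phi = dict(zip(g_nodes, image))           # candidate injective mapping
        if all(frozenset((phi[u], phi[v])) in h_edges for u, v in g_edges):
            return True
    return False

def is_m_ft_extension(h_nodes, h_edges, g_nodes, g_edges, m):
    if len(h_nodes) != len(g_nodes) + m:
        return False
    for faulty in combinations(h_nodes, m):       # every possible fault set
        nodes = [v for v in h_nodes if v not in faulty]
        edges = [e for e in h_edges if not (set(e) & set(faulty))]
        if not contains_subgraph(nodes, edges, g_nodes, g_edges):
            return False
    return True

g_nodes = [0, 1, 2, 3]
g_edges = [(0, 1), (1, 2), (2, 3), (3, 0)]        # C4, i.e. K_{2,2}

h_nodes = list(range(5))
k5_edges = list(combinations(h_nodes, 2))         # complete graph K5
c5_edges = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0)]

print(is_m_ft_extension(h_nodes, k5_edges, g_nodes, g_edges, 1))   # True
print(is_m_ft_extension(h_nodes, c5_edges, g_nodes, g_edges, 1))   # False
```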
818.
This paper studies the complexity of the problem of allocating m modules to n processors in a distributed system so as to minimize total communication and execution costs. When the communication graph is a tree, Bokhari has shown that the optimum allocation can be determined in O(mn^2) time. Recently, this result has been generalized by Fernandez-Baca, who has proposed an allocation algorithm that runs in O(mn^(k+1)) time when the communication graph is a partial k-tree. The author shows that in the case where communication costs are uniform, the module allocation problem can be solved in O(mn) time if the communication graph is a tree. This algorithm is asymptotically optimal.
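The tree case is amenable to a simple dynamic program, sketched below under the uniform-communication-cost assumption: dp(v)[p] is the cheapest cost of module v's subtree when v runs on processor p, and the uniform cost w lets the inner minimum collapse to min(dp(child)[p], best_child + w). The names and the small star-shaped instance are illustrative, not the paper's notation or its exact O(mn) algorithm.

```python
def min_allocation_cost(children, e, w, root=0):
    """children: module tree as parent -> list of children;
    e[v][p]: execution cost of module v on processor p;
    w: uniform communication cost for each cross-processor tree edge."""
    n = len(e[0])                                  # number of processors

    def dp(v):
        # cost[p] = cheapest cost of v's subtree when v runs on processor p
        cost = list(e[v])
        for c in children.get(v, []):
            child_cost = dp(c)
            best = min(child_cost)
            for p in range(n):
                cost[p] += min(child_cost[p], best + w)
        return cost

    return min(dp(root))

# 4 modules in a star rooted at module 0, 2 processors, w = 2
children = {0: [1, 2, 3]}
e = [[1, 9],        # module 0 is cheap on processor 0
     [9, 1],        # modules 1-3 are cheap on processor 1
     [9, 1],
     [9, 1]]
print(min_allocation_cost(children, e, w=2))   # 1 + 3*(1 + 2) = 10
```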
819.
A new approach is given for scheduling a sequential instruction stream for execution “in parallel” on asynchronous multiprocessors. The key idea in our approach is to exploit the fine grained parallelism present in the instruction stream. In this context, schedules are constructed by a careful balancing of execution and communication costs at the level of individual instructions and their data dependencies. Three methods are used to evaluate our approach. First, several existing methods are extended to the fine grained situation. Our approach is then compared to these methods using both static schedule length analyses and simulated executions of the scheduled code. In each instance, our method is found to provide significantly shorter schedules. Second, by varying parameters such as the speed of the instruction set and the speed/parallelism in the interconnection structure, simulation techniques are used to examine the effects of various architectural considerations on the executions of the schedules. These results show that our approach provides significant speedups in a wide range of situations. Third, schedules produced by our approach are executed on a two-processor Data General shared memory multiprocessor system. These experiments show that there is a strong correlation between our simulation results and these actual executions, and thereby serve to validate the simulation studies. Together, our results establish that fine grained parallelism can be exploited in a substantial manner when scheduling a sequential instruction stream for execution “in parallel” on asynchronous multiprocessors.
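A compact list-scheduling sketch in the same spirit, balancing per-instruction execution costs against a communication delay paid whenever a result crosses processors. The earliest-start heuristic and the tiny DAG are illustrative assumptions, not the paper's actual method.

```python
def topo_order(deps):
    """Return the instructions of a dependence DAG in topological order."""
    seen, order = set(), []
    def visit(i):
        if i not in seen:
            seen.add(i)
            for d in deps[i]:
                visit(d)
            order.append(i)
    for i in deps:
        visit(i)
    return order

def schedule(deps, exec_cost, comm_cost, n_procs):
    """deps[i] = list of instructions whose results instruction i consumes."""
    free = [0.0] * n_procs                 # time each processor becomes free
    placed = {}                            # instruction -> (processor, finish time)
    for i in topo_order(deps):
        best = None
        for p in range(n_procs):
            # data is ready once every predecessor's result has arrived at p
            ready = max([placed[d][1] + (0 if placed[d][0] == p else comm_cost)
                         for d in deps[i]] + [0.0])
            start = max(ready, free[p])
            if best is None or start < best[1]:
                best = (p, start)
        p, start = best
        placed[i] = (p, start + exec_cost[i])
        free[p] = start + exec_cost[i]
    return placed, max(f for _, f in placed.values())

# c needs a and b, d needs c
deps = {"a": [], "b": [], "c": ["a", "b"], "d": ["c"]}
cost = {"a": 1, "b": 1, "c": 2, "d": 1}
placement, makespan = schedule(deps, cost, comm_cost=3, n_procs=2)
print(placement, makespan)                 # makespan 7 on two processors
```

With a communication delay this large, the greedy spread across processors is actually counterproductive (a single processor would finish in 5 cycles), which is exactly the execution/communication tension a fine-grained scheduler has to balance.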
820.
Pipelining and bypassing in a VLIW processor
This short note describes issues involved in the bypassing mechanism for a very long instruction word (VLIW) processor and its relation to the pipeline structure of the processor. The authors first describe the pipeline structure of their processor, analyze its performance, and compare it to typical RISC-style pipeline structures in the context of a processor with multiple functional units. Next, they study the performance effects of various bypassing schemes in terms of their effectiveness in resolving pipeline data hazards and their effect on the processor cycle time.
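A back-of-the-envelope model of why bypassing matters in any pipelined design: it counts the stall cycles a dependent instruction pays with and without forwarding paths. The five-stage IF/ID/EX/MEM/WB pipeline and the assumption that the register file cannot be written and read in the same cycle are simplifications for illustration, not the processor studied in the note.

```python
STAGE = {"IF": 1, "ID": 2, "EX": 3, "MEM": 4, "WB": 5}

def stall_cycles(produce_stage, consume_stage, distance):
    """Bubbles needed when the consumer issues `distance` cycles after the producer.

    The producer's result becomes available at the end of produce_stage and must
    be present at the start of the consumer's consume_stage.
    """
    return max(0, STAGE[produce_stage] - STAGE[consume_stage] + 1 - distance)

# ALU result consumed by the very next instruction (distance 1):
print(stall_cycles("WB", "ID", 1))    # 3 stalls with no bypass network at all
print(stall_cycles("EX", "EX", 1))    # 0 stalls with an EX -> EX bypass path
print(stall_cycles("MEM", "EX", 1))   # 1 stall: the classic load-use delay
```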