911.
A general language for specifying resource allocation and time-tabling problems is presented. The language is based on an expert system paradigm that was developed previously by the authors and that enables the solution of resource allocation problems by using experts' knowledge and heuristics. The language enables the specification of a problem in terms of resources, activities, allocation rules, and constraints, and thus provides a convenient knowledge acquisition tool. The language syntax is powerful and allows the specification of rules and constraints that are very difficult to formulate with traditional approaches, and it also supports the specification of various control and backtracking strategies. We constructed a generalized inference engine that runs compiled resource allocation problem specification language (RAPS) programs and provides all necessary control structures. This engine acts as an expert system shell and is called expert system for resource allocation (ESRA). The performance of RAPS combined with ESRA is demonstrated by analyzing its solution of a typical resource allocation problem.
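The abstract does not reproduce the RAPS syntax itself, so the following is only a hypothetical Python rendering of the ingredients it names (resources, activities, allocation rules, and constraints) wired to a toy greedy engine. All names and the allocation policy are invented for illustration; ESRA's control and backtracking strategies are not modeled.

```python
# Hypothetical sketch (the actual RAPS syntax is not given above): the four
# ingredients the language captures -- resources, activities, allocation rules,
# and constraints -- expressed as plain Python data, driven by a toy greedy
# "engine".  ESRA's control and backtracking strategies are not reproduced.

resources  = {"room_a": {"seats": 30}, "room_b": {"seats": 100}}
activities = [{"name": "lecture", "size": 80}, {"name": "seminar", "size": 20}]

def fits(activity, resource):               # a constraint: capacity must suffice
    return resource["seats"] >= activity["size"]

def preference(activity, resource):         # an allocation rule: prefer tight fits
    return resource["seats"] - activity["size"]

def allocate(activities, resources):
    assignment, free = {}, dict(resources)
    for act in activities:
        candidates = [(preference(act, r), n) for n, r in free.items() if fits(act, r)]
        if not candidates:
            return None                      # a real engine would backtrack here
        _, chosen = min(candidates)
        assignment[act["name"]] = chosen
        del free[chosen]                     # each resource is used at most once
    return assignment

print(allocate(activities, resources))       # {'lecture': 'room_b', 'seminar': 'room_a'}
```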
912.
Efficient algorithms for processing large volumes of data are very important both for relational and new object-oriented database systems. Many query-processing operations can be implemented using sort- or hash-based algorithms, e.g. intersections, joins, and duplicate elimination. In the early relational database systems, only sort-based algorithms were employed. In the last decade, hash-based algorithms have gained acceptance and popularity, and are often considered generally superior to sort-based algorithms such as merge-join. In this article, we compare the concepts behind sort- and hash-based query-processing algorithms and conclude that (1) many dualities exist between the two types of algorithms, (2) their costs differ mostly by percentages rather than by factors, (3) several special cases exist that favor one or the other choice, and (4) there is a strong reason why both hash- and sort-based algorithms should be available in a query-processing system. Our conclusions are supported by experiments performed using the Volcano query execution engine.
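As a minimal sketch of the duality discussed above, the fragment below computes the same equi-join once with a sort-based (merge-join) strategy and once with a hash-based strategy. The relations and key function are made up for illustration and have nothing to do with the Volcano engine used in the article's experiments.

```python
# Minimal sketch (not from the article): one equi-join computed with a
# sort-based and a hash-based strategy, to illustrate the duality above.

def merge_join(r, s, key):
    """Sort-based join: sort both inputs on the key, then merge."""
    r, s = sorted(r, key=key), sorted(s, key=key)
    out, i, j = [], 0, 0
    while i < len(r) and j < len(s):
        kr, ks = key(r[i]), key(s[j])
        if kr < ks:
            i += 1
        elif kr > ks:
            j += 1
        else:
            # gather the full run of equal keys on each side
            i2 = i
            while i2 < len(r) and key(r[i2]) == kr:
                i2 += 1
            j2 = j
            while j2 < len(s) and key(s[j2]) == ks:
                j2 += 1
            out.extend((a, b) for a in r[i:i2] for b in s[j:j2])
            i, j = i2, j2
    return out

def hash_join(r, s, key):
    """Hash-based join: build a hash table on one input, probe with the other."""
    table = {}
    for a in r:
        table.setdefault(key(a), []).append(a)
    return [(a, b) for b in s for a in table.get(key(b), [])]

pairs = [(1, 'x'), (2, 'y'), (2, 'z')]
other = [(2, 'p'), (3, 'q')]
assert sorted(merge_join(pairs, other, key=lambda t: t[0])) == \
       sorted(hash_join(pairs, other, key=lambda t: t[0]))
```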
913.
This paper considers the use of massively parallel architectures to execute a trace-driven simulation of a single cache set. A method is presented for the least-recently-used (LRU) policy, which, regardless of the set size C, runs in time O(log N) using N processors on the EREW (exclusive read, exclusive write) parallel model. A simpler LRU simulation algorithm is given that runs in O(C log N) time using N/log N processors. We present timings of this algorithm's implementation on the MasPar MP-1, a machine with 16384 processors. A broad class of reference-based line replacement policies is considered, which includes LRU as well as the least-frequently-used (LFU) and random replacement policies. A simulation method is presented for any such policy that, on any trace of length N directed to a C-line set, runs in O(C log N) time with high probability using N processors on the EREW model. The algorithms are simple, have very little space overhead, and are well suited for SIMD implementation.
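For reference, a plain sequential simulation of a single C-line LRU set looks like the sketch below. This is only a baseline written for illustration, not the EREW-parallel algorithms the paper contributes; those compute the same hit counts in O(log N) or O(C log N) time.

```python
# Sketch (assumption: a plain sequential baseline, not the parallel EREW
# algorithm described above): simulate one C-line cache set under LRU for a
# reference trace and count hits.

from collections import OrderedDict

def lru_set_hits(trace, C):
    """Return the number of hits when the trace is directed at one C-line set."""
    lines = OrderedDict()           # most recently used entry is last
    hits = 0
    for tag in trace:
        if tag in lines:
            hits += 1
            lines.move_to_end(tag)  # refresh recency on a hit
        else:
            if len(lines) == C:     # set full: evict the least recently used line
                lines.popitem(last=False)
            lines[tag] = True
    return hits

print(lru_set_hits(['a', 'b', 'a', 'c', 'b', 'd', 'a'], C=2))  # -> 1
```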
914.
Multicast communication, in which the same message is delivered from a source node to an arbitrary number of destination nodes, is increasingly demanded in parallel computing. System-supported multicast services can potentially offer improved performance, increased functionality, and simplified programming, and may in turn be used to support various higher-level operations for data movement and global process control. This paper presents efficient algorithms to implement multicast communication in wormhole-routed direct networks, in the absence of hardware multicast support, by exploiting the properties of the switching technology. Minimum-time multicast algorithms are presented for n-dimensional meshes and hypercubes that use deterministic, dimension-ordered routing of unicast messages. Both algorithms can deliver a multicast message to m-1 destinations in ⌈log₂ m⌉ message-passing steps, while avoiding contention among the constituent unicast messages. Performance results of implementations on a 64-node nCUBE-2 hypercube and a 168-node Symult 2010 2-D mesh are given.
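The ⌈log₂ m⌉ bound comes from a recursive-doubling pattern: every node that already holds the message forwards it to one new destination per step, so the informed set doubles each step. The sketch below shows only that generic schedule; the dimension-ordered routing and the contention-avoidance analysis that are the paper's actual contribution are not modeled.

```python
# Sketch (assumption): the generic "recursive doubling" schedule behind the
# ceil(log2 m) bound quoted above.  Each node that already holds the message
# forwards it to one new destination per step, so the informed set doubles.
# The contention-free, dimension-ordered routing of the paper is not modeled.

import math

def multicast_schedule(source, destinations):
    """Return a list of steps; each step is a list of (sender, receiver) unicasts."""
    informed = [source]
    pending = [d for d in destinations if d != source]
    steps = []
    while pending:
        step = []
        for sender in list(informed):    # every informed node sends once per step
            if not pending:
                break
            receiver = pending.pop(0)
            step.append((sender, receiver))
            informed.append(receiver)
        steps.append(step)
    return steps

steps = multicast_schedule(0, list(range(1, 8)))
print(len(steps), math.ceil(math.log2(8)))   # 3 steps for 8 nodes -> matches ceil(log2 m)
```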
915.
This paper describes several loop transformation techniques for extracting parallelism from nested loop structures. Nested loops can then be scheduled to run in parallel so that execution time is minimized. One technique is called selective cycle shrinking, and the other is called true dependence cycle shrinking. It is shown how selective shrinking is related to linear scheduling of nested loops and how true dependence shrinking is related to conflict-free mappings of higher-dimensional algorithms into lower-dimensional processor arrays. Methods are proposed in this paper to find the selective and true dependence shrinkings with minimum total execution time by applying the techniques of finding optimal linear schedules and optimal and conflict-free mappings proposed by W. Shang and A.B. Fortes.
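As a simplified illustration of the idea behind cycle shrinking (a hedged sketch for a 1-D loop with a single constant dependence distance, far simpler than the nested-loop case treated in the paper), a loop with distance d can be split into serial blocks of d independent iterations:

```python
# Sketch (assumption: a 1-D loop with one constant dependence distance d).
# Cycle shrinking splits the iteration space into blocks of size d; iterations
# inside a block are independent and could run in parallel, blocks run serially.

def serial(a, d):
    for i in range(d, len(a)):
        a[i] = a[i - d] + 1                               # dependence distance d

def shrunk(a, d):
    for block in range(d, len(a), d):                     # serial loop over blocks
        for i in range(block, min(block + d, len(a))):    # parallelizable inner loop
            a[i] = a[i - d] + 1

x = list(range(10)); y = list(range(10))
serial(x, 3); shrunk(y, 3)
assert x == y                                             # same result either way
```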
916.
We develop a characterization for m-fault-tolerant extensions, and for optimal m-fault-tolerant extensions, of a complete multipartite graph. Our formulation shows that this problem is equivalent to an interesting combinatorial problem on the partitioning of integers. This characterization leads to a new procedure for constructing an optimal m-fault-tolerant extension of any complete multipartite graph, for any m ≥ 0. The proposed procedure is mainly useful when the size of the graph is relatively small, because the search time required is exponential. This exponential search, however, is not always necessary. We prove several necessary conditions that help us, in several cases, to identify some optimal m-fault-tolerant extensions without performing any search.
917.
This paper studies the complexity of the problem of allocating m modules to n processors in a distributed system to minimize total communication and execution costs. When the communication graph is a tree, Bokhari has shown that the optimum allocation can be determined in O(mn^2) time. Recently, this result has been generalized by Fernandez-Baca, who has proposed an allocation algorithm that runs in O(mn^(k+1)) time when the communication graph is a partial k-tree. The author shows that in the case where communication costs are uniform, the module allocation problem can be solved in O(mn) time if the communication graph is a tree. This algorithm is asymptotically optimal.
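A straightforward dynamic program over the communication tree illustrates the setting. The sketch below is the naive O(mn^2) version with invented cost tables, not the O(mn) algorithm of the article, and it assumes a uniform communication cost c is paid on every tree edge whose endpoints are placed on different processors.

```python
# Sketch (assumption): the straightforward dynamic program over a tree-shaped
# communication graph.  exec_cost[i][p] is the cost of running module i on
# processor p; a uniform cost c is paid on every tree edge whose endpoints land
# on different processors.  This naive version is O(m * n^2); the article's
# contribution is an O(m * n) algorithm for this uniform-cost case.

def min_allocation_cost(children, exec_cost, c, root=0):
    n = len(exec_cost[0])                        # number of processors

    def best(i):
        """best(i)[p] = cheapest cost of the subtree rooted at module i,
        given that module i is placed on processor p."""
        child_tables = [best(j) for j in children.get(i, [])]
        table = []
        for p in range(n):
            cost = exec_cost[i][p]
            for t in child_tables:
                # child stays on p, or moves elsewhere and pays the uniform cost c
                cost += min(t[q] + (c if q != p else 0) for q in range(n))
            table.append(cost)
        return table

    return min(best(root))

# Three modules in a chain 0 -> 1 -> 2, two processors.
children = {0: [1], 1: [2]}
exec_cost = [[1, 5], [5, 1], [1, 5]]
print(min_allocation_cost(children, exec_cost, c=2))   # 1 + (1+2) + (1+2) = 7
```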
918.
A new approach is given for scheduling a sequential instruction stream for execution "in parallel" on asynchronous multiprocessors. The key idea in our approach is to exploit the fine-grained parallelism present in the instruction stream. In this context, schedules are constructed by carefully balancing execution and communication costs at the level of individual instructions and their data dependencies. Three methods are used to evaluate our approach. First, several existing methods are extended to the fine-grained setting. Our approach is then compared to these methods using both static schedule-length analyses and simulated executions of the scheduled code. In each instance, our method is found to provide significantly shorter schedules. Second, by varying parameters such as the speed of the instruction set and the speed/parallelism of the interconnection structure, simulation techniques are used to examine the effects of various architectural considerations on the executions of the schedules. These results show that our approach provides significant speedups in a wide range of situations. Third, schedules produced by our approach are executed on a two-processor Data General shared-memory multiprocessor system. These experiments show that there is a strong correlation between our simulation results and these actual executions, and thereby serve to validate the simulation studies. Together, our results establish that fine-grained parallelism can be exploited in a substantial manner when scheduling a sequential instruction stream for execution "in parallel" on asynchronous multiprocessors.
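The flavor of balancing execution against communication at instruction granularity can be conveyed with a generic list scheduler over a dependence DAG. This is a sketch under simplifying assumptions, not the authors' algorithm: each instruction is placed on the processor that lets it finish earliest, and using a different processor than a predecessor adds a fixed communication delay.

```python
# Sketch (assumption: a generic list scheduler, not the authors' method).
# Instructions form a dependence DAG; running a dependent pair on different
# processors adds a fixed communication delay, so the scheduler weighs
# execution against communication when placing each instruction.

def list_schedule(deps, exec_time, num_procs, comm_delay):
    """deps[i] = list of instructions that i depends on (ids in topological order)."""
    proc_free = [0] * num_procs              # when each processor becomes idle
    finish, placed_on = {}, {}
    for i in sorted(deps):                   # assumes ids are topologically ordered
        best = None
        for p in range(num_procs):
            # data from a predecessor on another processor arrives comm_delay later
            ready = max([0] + [finish[j] + (0 if placed_on[j] == p else comm_delay)
                               for j in deps[i]])
            start = max(ready, proc_free[p])
            if best is None or start + exec_time[i] < best[0]:
                best = (start + exec_time[i], p)
        finish[i], placed_on[i] = best
        proc_free[best[1]] = best[0]
    return max(finish.values()), placed_on

deps = {0: [], 1: [0], 2: [0], 3: [1, 2]}
length, placement = list_schedule(deps, exec_time=[1, 1, 1, 1], num_procs=2, comm_delay=2)
print(length, placement)   # with costly communication, everything stays on one processor
```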
919.
Pipelining and bypassing in a VLIW processor
This short note describes issues involved in the bypassing mechanism for a very long instruction word (VLIW) processor and its relation to the pipeline structure of the processor. The authors first describe the pipeline structure of their processor, analyze its performance, and compare it to typical RISC-style pipeline structures in the context of a processor with multiple functional units. Next they study the performance effects of various bypassing schemes in terms of their effectiveness in resolving pipeline data hazards and their effect on the processor cycle time.
920.
Cascade correlation is a very flexible, efficient, and fast algorithm for supervised learning. It incrementally builds the network by adding hidden units one at a time, until the desired input/output mapping is achieved. It connects all the previously installed units to the new unit being added. Consequently, each new unit in effect adds a new layer, and the fan-in of the hidden and output units keeps increasing as more units are added. The resulting structure can be hard to implement in VLSI, because the connections are irregular and the fan-in is unbounded. Moreover, the depth, or propagation delay, through the resulting network is directly proportional to the number of units and can be excessive. We have modified the algorithm to generate networks with restricted fan-in and small depth (propagation delay) by controlling the connectivity. Our results reveal that there is a tradeoff between connectivity and other performance attributes such as depth, total number of independent parameters, and learning time.
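Only the connectivity restriction can be sketched from the abstract. The fragment below shows one possible policy (keep at most the k newest sources for each new hidden unit) purely as an illustration of how bounding fan-in changes the cascade structure; the paper's actual connectivity-control rule and the training procedure are not reproduced.

```python
# Structural sketch (assumption): only the connectivity rule is shown.  Classic
# cascade correlation connects each new hidden unit to *all* inputs and all
# previously installed units; the restricted variant keeps only the k newest
# sources, which bounds the fan-in and slows the depth growth.  Training and
# the correlation maximization step are omitted entirely.

def incoming_sources(num_inputs, installed_units, k=None):
    """Candidate source ids for the next hidden unit (inputs are 0..num_inputs-1,
    hidden units are numbered after the inputs)."""
    sources = list(range(num_inputs)) + installed_units
    if k is None:                 # classic cascade correlation: unbounded fan-in
        return sources
    return sources[-k:]           # restricted variant: at most k newest sources

installed = [4, 5, 6]             # three hidden units already added after 4 inputs
print(incoming_sources(4, installed))        # [0, 1, 2, 3, 4, 5, 6] -> fan-in grows
print(incoming_sources(4, installed, k=3))   # [4, 5, 6]             -> fan-in bounded
```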