211.
Local autonomous dynamic channel allocation (LADCA) including power control is essential to accommodating the anticipated explosion of demand for wireless. The authors simulate call performance for users accessing channels in a regular cellular array with a base located at the center of each hexagon. The computer model includes stochastic channel demand and a propagation environment characterized by attenuation with distance as well as shadow fading. The study of LADCA shows that distributed power control and channel access can be combined in an access management policy that achieves satisfactory system capacity and provides the desired call performance. The authors report that LADCA with power control is observed to be stable, alleviating a major concern about users who are unaware of the signal-to-interference problems their presence on a channel might cause others; that there can be substantial inadvertent dropping of calls in progress caused by originating calls, so that modeling user time dynamics is essential; and that LADCA compares very favorably with fixed channel allocation (FCA) in a comparative example.
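The distributed policy the abstract describes couples autonomous channel selection with iterative power control. The sketch below is a minimal, hypothetical illustration of that coupling for a single user; the target SIR, power limits, and interference model are invented for the example and are not taken from the authors' simulator.

```python
# Minimal sketch of distributed channel access plus iterative power control,
# in the spirit of the LADCA policy described above. All names and numbers
# (TARGET_SIR, MAX_POWER, the interference model) are illustrative, not the
# authors' simulator.
import random

TARGET_SIR = 10.0   # linear scale, hypothetical requirement
MAX_POWER = 1.0
NOISE = 1e-3

def measured_sir(power, gain, interference):
    """SIR seen by the receiver for a given transmit power."""
    return power * gain / (interference + NOISE)

def pick_channel(gain, interference_per_channel):
    """Autonomously pick any channel that could meet the target at full power."""
    for ch, interf in enumerate(interference_per_channel):
        if measured_sir(MAX_POWER, gain, interf) >= TARGET_SIR:
            return ch
    return None  # blocked: no admissible channel

def power_control_step(power, gain, interference):
    """One distributed power-control iteration (scale power toward the target)."""
    sir = measured_sir(power, gain, interference)
    return min(MAX_POWER, power * TARGET_SIR / sir)

# Toy usage: one user, four candidate channels with random interference levels.
gain = 0.05
interference = [random.uniform(1e-4, 1e-2) for _ in range(4)]
ch = pick_channel(gain, interference)
if ch is not None:
    p = 0.1
    for _ in range(10):
        p = power_control_step(p, gain, interference[ch])
    print(f"channel {ch}, converged power {p:.3f}")
else:
    print("call blocked")
```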
212.
View materialization is a well-known optimization technique in relational database systems. We present a similar, yet more powerful, optimization concept for object-oriented data models: function materialization. Exploiting the object-oriented paradigm, namely classification, object identity, and encapsulation, facilitates a rather easy incorporation of function materialization into (existing) object-oriented systems. Only those types (classes) whose instances are involved in some materialization are appropriately modified and recompiled, leaving the remainder of the object system invariant. Furthermore, the exploitation of encapsulation (information hiding) and object identity provides additional performance-tuning measures that drastically decrease the invalidation and rematerialization overhead incurred by updates in the object base. First, it allows us to cleanly separate the object instances that are irrelevant to the materialized functions from those that are involved in the materialization of some function result, and thus to penalize only the involved objects upon update. Second, the principle of information hiding facilitates fine-grained control over the invalidation of precomputed results. Based on specifications given by the data type implementor, the system can exploit operational semantics to better distinguish between update operations that invalidate a materialized result and those that require no rematerialization. The paper concludes with a quantitative analysis of function materialization based on two sample performance benchmarks obtained from our experimental object base system GOM.
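As a concrete illustration of the idea, the hypothetical Python sketch below caches the result of an "expensive" method per object and uses knowledge of which update operations affect it to invalidate the cache selectively; it stands in for, but is not, the GOM implementation, and the class and attribute names are invented.

```python
# A minimal, hypothetical sketch of function materialization: the result of an
# expensive method is cached per object identity, and updates to the attributes
# the method depends on invalidate only that object's cached value. This is an
# illustration of the idea, not the GOM implementation.

class Part:
    def __init__(self, unit_price, quantity):
        self._unit_price = unit_price
        self._quantity = quantity
        self._materialized_value = None   # cached result of value()

    def value(self):
        """Materialized function: recompute only when the cache is invalid."""
        if self._materialized_value is None:
            self._materialized_value = self._unit_price * self._quantity  # "expensive"
            print("recomputed")
        return self._materialized_value

    # Encapsulated updates: only operations known to affect value() invalidate it.
    def set_quantity(self, q):
        self._quantity = q
        self._materialized_value = None   # invalidate

    def rename(self, name):
        self.name = name                  # irrelevant update: cache stays valid

p = Part(unit_price=3.0, quantity=4)
print(p.value())      # recomputed -> 12.0
print(p.value())      # served from the materialized result
p.rename("bolt")
print(p.value())      # still cached: rename() is known not to affect value()
p.set_quantity(5)
print(p.value())      # recomputed -> 15.0
```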
213.
A general language for specifying resource allocation and time-tabling problems is presented. The language is based on an expert system paradigm that was developed previously by the authors and that enables the solution of resource allocation problems by using experts' knowledge and heuristics. The language enables the specification of a problem in terms of resources, activities, allocation rules, and constraints, and thus provides a convenient knowledge acquisition tool. Its syntax is powerful and allows the specification of rules and constraints that are very difficult to formulate with traditional approaches, and it also supports the specification of various control and backtracking strategies. We constructed a generalized inference engine that runs compiled programs in the resource allocation problem specification language (RAPS) and provides all necessary control structures. This engine acts as an expert system shell and is called the expert system for resource allocation (ESRA). The performance of RAPS combined with ESRA is demonstrated by analyzing its solution of a typical resource allocation problem.
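To make the ingredients of such a specification concrete, here is a toy Python encoding of resources, activities, one constraint, and one allocation rule, with a naive greedy allocator standing in for the ESRA inference engine. The data, the rule, and the absence of backtracking are all illustrative; they do not reflect actual RAPS syntax.

```python
# A toy encoding of the ingredients a RAPS-style specification names
# (resources, activities, allocation rules, constraints), with a naive
# greedy allocator standing in for the ESRA inference engine. The data
# and rule below are illustrative only; the actual RAPS syntax differs.

rooms = {"R101": {"capacity": 30}, "Aud1": {"capacity": 200}}                   # resources
lectures = [{"name": "DB", "students": 150}, {"name": "AI", "students": 25}]   # activities

def fits(lecture, room):
    """Constraint: the room must hold the expected audience."""
    return rooms[room]["capacity"] >= lecture["students"]

def prefer_smallest(candidates):
    """Allocation rule (heuristic): use the smallest room that fits."""
    return min(candidates, key=lambda r: rooms[r]["capacity"])

def allocate(lectures):
    assignment, free = {}, set(rooms)
    for lec in lectures:
        candidates = [r for r in free if fits(lec, r)]
        if not candidates:
            return None            # a real engine would backtrack here
        room = prefer_smallest(candidates)
        assignment[lec["name"]] = room
        free.remove(room)
    return assignment

print(allocate(lectures))   # e.g. {'DB': 'Aud1', 'AI': 'R101'}
```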
214.
Efficient algorithms for processing large volumes of data are very important both for relational and for new object-oriented database systems. Many query-processing operations can be implemented using sort- or hash-based algorithms, e.g., intersections, joins, and duplicate elimination. In the early relational database systems, only sort-based algorithms were employed. In the last decade, hash-based algorithms have gained acceptance and popularity, and are often considered generally superior to sort-based algorithms such as merge-join. In this article, we compare the concepts behind sort- and hash-based query-processing algorithms and conclude that (1) many dualities exist between the two types of algorithms, (2) their costs differ mostly by percentages rather than by factors, (3) several special cases exist that favor one or the other choice, and (4) there is a strong reason why both hash- and sort-based algorithms should be available in a query-processing system. Our conclusions are supported by experiments performed using the Volcano query execution engine.
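The duality is easy to see on a single operation. The sketch below computes the same equi-join twice, once with a hash-based build/probe and once with a sort-based merge; the relations and keys are invented for the example.

```python
# Sketch of the sort/hash duality on a single operation, an equi-join:
# both algorithms produce the same result; they differ in how they bring
# matching keys together (ordering vs. partitioning). Illustrative only.
from collections import defaultdict

R = [(1, "a"), (3, "b"), (5, "c"), (3, "d")]
S = [(3, "x"), (5, "y"), (7, "z")]

def hash_join(R, S):
    table = defaultdict(list)          # build phase on one input
    for k, v in S:
        table[k].append(v)
    return [(k, v, w) for k, v in R for w in table.get(k, [])]  # probe phase

def merge_join(R, S):
    R, S = sorted(R), sorted(S)        # both inputs sorted on the join key
    out, j = [], 0
    for k, v in R:
        while j < len(S) and S[j][0] < k:
            j += 1
        i = j
        while i < len(S) and S[i][0] == k:
            out.append((k, v, S[i][1]))
            i += 1
    return out

assert sorted(hash_join(R, S)) == sorted(merge_join(R, S))
print(sorted(hash_join(R, S)))   # [(3, 'b', 'x'), (3, 'd', 'x'), (5, 'c', 'y')]
```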
215.
We present new methods for load balancing of unstructured tree computations on large-scale SIMD machines, and analyze the scalability of these and other existing schemes. An efficient formulation of tree search on an SIMD machine consists of two major components: a triggering mechanism, which determines when search-space redistribution must occur to balance the search space over processors, and a scheme to redistribute the search space. We have devised a new redistribution mechanism and a new triggering mechanism. Either of these can be used in conjunction with triggering and redistribution mechanisms developed by other researchers. We analyze the scalability of these mechanisms and verify the results experimentally. The analysis and experiments show that our new load-balancing methods are highly scalable on SIMD architectures. Their scalability is shown to be no worse than that of the best load-balancing schemes on MIMD architectures. We verify our theoretical results by implementing the 15-puzzle problem on a CM-2 SIMD parallel computer.
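As a schematic of the two components, the following serial Python sketch shows a trigger that fires when enough processors run out of work and a round-robin redistribution of the remaining search frontier. It illustrates the general idea only; the idle-fraction threshold and the dealing strategy are invented and are not the authors' SIMD schemes.

```python
# A hypothetical, serial illustration of the two components named above: a
# trigger that fires when too many "processors" run out of work, and a
# redistribution step that rebalances the unexplored search frontier.
def trigger(queues, idle_fraction=0.5):
    """Fire redistribution once at least half the processors are idle."""
    idle = sum(1 for q in queues if not q)
    return idle >= idle_fraction * len(queues)

def redistribute(queues):
    """Collect all frontier nodes and deal them out round-robin."""
    pool = [node for q in queues for node in q]
    new_queues = [[] for _ in queues]
    for i, node in enumerate(pool):
        new_queues[i % len(queues)].append(node)
    return new_queues

# Toy frontier of search-tree nodes spread unevenly over 4 processors.
queues = [[("n", d) for d in range(6)], [], [("m", 0)], []]
if trigger(queues):
    queues = redistribute(queues)
print([len(q) for q in queues])   # [2, 2, 2, 1]
```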
216.
In the above paper, Yu (IEEE Trans. Neural Networks, vol. 3, no. 6, pp. 1019-21, 1992) claims to prove that local minima do not exist in the error surface of backpropagation networks being trained on data with t distinct input patterns when the network is capable of exactly representing arbitrary mappings on t input patterns. The commenter points out that the proof presented is flawed, so the resulting claims remain unproved. In reply, Yu points out that the undesired phenomenon that was cited can be avoided by simply imposing the arbitrary mapping capacity of the network on lemma 1 in the article.
217.
218.
Volcano: an extensible and parallel query evaluation system   (cited 2 times: 0 self-citations, 2 by others)
To investigate the interactions of extensibility and parallelism in database query processing, we have developed a new dataflow query execution system called Volcano. The Volcano effort provides a rich environment for research and education in database systems design, heuristics for query optimization, parallel query execution, and resource allocation. Volcano uses a standard interface between algebra operators, allowing easy addition of new operators and operator implementations. Operations on individual items, e.g., predicates, are imported into the query-processing operators using support functions. The semantics of support functions is not prescribed; any data type, including complex objects, and any operation can be realized. Thus, Volcano is extensible with new operators, algorithms, data types, and type-specific methods. Volcano includes two novel meta-operators. The choose-plan meta-operator supports dynamic query evaluation plans that allow delaying selected optimization decisions until run-time, e.g., for embedded queries with free variables. The exchange meta-operator supports intra-operator parallelism on partitioned datasets and both vertical and horizontal inter-operator parallelism, translating between demand-driven dataflow within processes and data-driven dataflow between processes. All operators, with the exception of the exchange operator, have been designed and implemented in a single-process environment and parallelized using the exchange operator. Even operators not yet designed can be parallelized using this new operator if they use and provide the iterator interface. Thus, the issues of data manipulation and parallelism have become orthogonal, making Volcano the first implemented query execution engine that effectively combines extensibility and parallelism.
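A minimal Python rendering of the open/next/close operator interface, and of a support function (here, a predicate) passed into an operator, is given below. The operators and the demand-driven driver loop are illustrative stand-ins, not Volcano's actual implementation.

```python
# Minimal rendering of the open/next/close ("iterator") operator interface
# that the abstract credits with making operators composable. The operators
# below are illustrative stand-ins, not Volcano's implementation.
class Scan:
    def __init__(self, rows): self.rows = rows
    def open(self): self.it = iter(self.rows)
    def next(self):                 # returns a row, or None at end of stream
        return next(self.it, None)
    def close(self): self.it = None

class Select:
    """Filter operator; the predicate is a 'support function' supplied by the caller."""
    def __init__(self, child, predicate):
        self.child, self.predicate = child, predicate
    def open(self): self.child.open()
    def next(self):
        row = self.child.next()
        while row is not None and not self.predicate(row):
            row = self.child.next()
        return row
    def close(self): self.child.close()

# Demand-driven evaluation: the consumer pulls rows through the plan.
plan = Select(Scan([{"x": 1}, {"x": 5}, {"x": 9}]), predicate=lambda r: r["x"] > 3)
plan.open()
row = plan.next()
while row is not None:
    print(row)                      # {'x': 5} then {'x': 9}
    row = plan.next()
plan.close()
```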
219.
We present the design of E-kernel, an embedding kernel on the Victor V256 message-passing partitionable multiprocessor, developed for the support of program mapping and network reconfiguration. E-kernel supports the embedding of a new network topology onto Victor's 2D mesh and also the embedding of a task graph onto the 2D mesh network or the reconfigured network. In the current implementation, the reconfigured network can be a line or an even-size ring, and the task graphs can be meshes or tori of a variety of dimensions and shapes, or graphs with similar topologies. Application programs that have these task-graph topologies and are designed according to the communication model of E-kernel can be run without any change on partitions connected by the 2D mesh, line, or ring. Further, E-kernel attempts the communication optimization of these programs on the different networks automatically, thus making both the network topology and the communication optimization attempt completely transparent to the application programs. Many of the embeddings used in E-kernel are optimal or asymptotically optimal (with respect to minimum dilation cost). The implementation of E-kernel translated some of the many theoretical results in graph embeddings into practical tools for program mapping and network reconfiguration in a parallel system. E-kernel is functional on Victor V256. Measurements of E-kernel's performance on V256 are also included.
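One of the embeddings such a kernel relies on can be shown in a few lines: mapping an even-size ring onto a 2D mesh with dilation 1 via a Hamiltonian cycle. The construction below is a standard textbook one, used here only for illustration (it assumes an even number of mesh rows); it is not E-kernel's code.

```python
# A hedged illustration of the kind of embedding E-kernel performs: mapping an
# even-size ring onto an m x n 2D mesh with dilation 1 (every pair of ring
# neighbors lands on mesh neighbors). Textbook construction, not E-kernel code.
def ring_into_mesh(m, n):
    """Return mesh coordinates for ring nodes 0..m*n-1 (m must be even)."""
    assert m % 2 == 0 and n >= 2
    cycle = [(0, 0)]
    for r in range(m):                       # snake through columns 1..n-1
        cols = range(1, n) if r % 2 == 0 else range(n - 1, 0, -1)
        cycle += [(r, c) for c in cols]
    cycle += [(r, 0) for r in range(m - 1, 0, -1)]   # walk back up column 0
    return cycle

def dilation(cycle):
    """Largest mesh distance between images of adjacent ring nodes."""
    def dist(a, b): return abs(a[0] - b[0]) + abs(a[1] - b[1])
    return max(dist(cycle[i], cycle[(i + 1) % len(cycle)]) for i in range(len(cycle)))

emb = ring_into_mesh(4, 5)
print(len(emb), dilation(emb))   # 20 1  -> every ring edge maps to a mesh edge
```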
220.
Nonlinear adaptive filters based on a variety of neural network models have been used successfully for system identification and noise cancellation in a wide class of applications. An important problem in data communications is that of channel equalization, i.e., the removal of interference introduced by linear or nonlinear message-corrupting mechanisms, so that the originally transmitted symbols can be recovered correctly at the receiver. In this paper we introduce an adaptive recurrent neural network (RNN) based equalizer whose small size and high performance make it suitable for high-speed channel equalization. We propose RNN-based structures for both trained adaptation and blind equalization, and we evaluate their performance via extensive simulations for a variety of signal modulations and communication channel models. It is shown that the RNN equalizers have performance comparable to that of traditional linear-filter-based equalizers when the channel interferences are relatively mild, and that they outperform them by several orders of magnitude when either the channel's transfer function has spectral nulls or severe nonlinear distortion is present. In addition, the small-size RNN equalizers, being essentially generalized IIR filters, are shown to outperform multilayer perceptron equalizers of larger computational complexity in linear and nonlinear channel equalization cases.
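The remark that an RNN equalizer is essentially a generalized IIR filter can be illustrated numerically: for a toy minimum-phase channel, a single recurrent unit with hand-set weights and a linear activation realizes the exact IIR inverse of the channel. The channel and weights below are invented for the example; the equalizers in the paper adapt their weights (trained or blind) and use nonlinear units.

```python
# A small numerical illustration of the point that a recurrent equalizer is a
# generalized IIR filter. Channel, weights, and linear activation are all
# illustrative, not the adaptive equalizers proposed in the paper.
import random

# BPSK symbols through a toy minimum-phase channel x[n] = s[n] + 0.5*s[n-1].
s = [random.choice([-1, 1]) for _ in range(20)]
x = [s[n] + 0.5 * (s[n - 1] if n > 0 else 0) for n in range(len(s))]

# One "recurrent neuron" with feedforward weight 1 and feedback weight -0.5:
# y[n] = x[n] - 0.5*y[n-1], exactly the IIR inverse of the channel.
y_prev, decisions = 0.0, []
for xn in x:
    y = 1.0 * xn - 0.5 * y_prev      # the recurrent state carries the IIR memory
    decisions.append(1 if y > 0 else -1)
    y_prev = y

print(decisions == s)                 # True: the intersymbol interference is removed exactly
```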