931.
The object-oriented approach to system structuring has found widespread acceptance among designers and developers of robust computing systems. The authors propose a system structure for distributed programming systems that support persistent objects, and describe how properties such as persistence and recoverability can be implemented. The proposed structure is modular, permitting easy exploitation of any distributed computing facilities provided by the underlying system. An existing system constructed according to the principles espoused here is examined to illustrate the practical utility of the proposed approach to system structuring.
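This item describes an architecture rather than an algorithm, but its central property, object state that survives failures and can be recovered, is easy to sketch. The Python fragment below is a hypothetical illustration of a persistent, recoverable object using an atomic write-then-rename checkpoint; the class name, file layout, and pickle-based persistence are inventions of this sketch, not the authors' system.

```python
import os
import pickle
import tempfile

class PersistentObject:
    """Minimal sketch of a persistent, recoverable object.

    State is checkpointed atomically (written to a temporary file,
    then renamed), so a crash in the middle of a save leaves the
    previous committed checkpoint intact.
    """

    def __init__(self, path, initial_state=None):
        self.path = path
        # Recovery: reload the last committed state if one exists.
        if os.path.exists(path):
            with open(path, "rb") as f:
                self.state = pickle.load(f)
        else:
            self.state = initial_state

    def commit(self):
        # Atomic checkpoint: os.replace is atomic on POSIX and Windows.
        directory = os.path.dirname(os.path.abspath(self.path))
        fd, tmp = tempfile.mkstemp(dir=directory)
        with os.fdopen(fd, "wb") as f:
            pickle.dump(self.state, f)
        os.replace(tmp, self.path)

if __name__ == "__main__":
    counter = PersistentObject("counter.pkl", initial_state=0)
    counter.state += 1
    counter.commit()  # state survives a crash after this point
    print(counter.state)
```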
932.
A synchronizer is a compiler that transforms a program designed to run in a synchronous network into a program that runs in an asynchronous network. The behavior of a simple synchronizer, which also represents a basic mechanism for distributed computing and for the analysis of marked graphs, was studied by S. Even and S. Rajsbaum (1990) under the assumption that message transmission delays and processing times are constant. We study the behavior of the simple synchronizer when processing times and transmission delays are random. The main performance measure is the rate of a network, i.e., the average number of computational steps executed by a processor in the network per unit time. We analyze the effect of the topology and of the probability distributions of the random variables on the behavior of the network. For random variables with exponential distribution, we provide tight (i.e., attainable) bounds and study the effect of a bottleneck processor on the rate.
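The rate studied here is easy to estimate empirically. The sketch below simulates the simple synchronizer on a small graph: each processor begins pulse k+1 only after hearing pulse-k messages from all its neighbors, with the combined delay and processing time drawn from an exponential distribution. The graph, the single-delay model, and the helper names are assumptions of this illustration, not the paper's analysis.

```python
import random

def simulate_rate(adj, steps=2000, mean_delay=1.0, seed=0):
    """Estimate the rate (pulses per unit time) of the simple synchronizer.

    adj maps each node to its neighbor list. Message delay plus
    processing time is modeled as one exponential random variable
    (an assumption made for this illustration).
    """
    rng = random.Random(seed)
    t = [0.0] * len(adj)  # time at which each node starts pulse 0
    for _ in range(steps):
        # Pulse k+1 starts once pulse-k messages from all neighbors arrive.
        t = [max(t[u] + rng.expovariate(1.0 / mean_delay)
                 for u in adj[v] + [v])
             for v in range(len(adj))]
    return steps / max(t)  # pulses completed per unit time

if __name__ == "__main__":
    ring4 = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [0, 2]}
    print(f"estimated rate on a 4-ring: {simulate_rate(ring4):.3f}")
```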
933.
We present the design of E-kernel, an embedding kernel on the Victor V256 message-passing partitionable multiprocessor, developed to support program mapping and network reconfiguration. E-kernel supports the embedding of a new network topology onto Victor's 2D mesh, as well as the embedding of a task graph onto the 2D mesh network or the reconfigured network. In the current implementation, the reconfigured network can be a line or an even-size ring, and the task graphs can be meshes or tori of a variety of dimensions and shapes, or graphs with similar topologies. Application programs that have these task-graph topologies and are designed according to the communication model of E-kernel can run without any change on partitions connected by the 2D mesh, line, or ring. Further, E-kernel attempts to optimize the communication of these programs on the different networks automatically, making both the network topology and the communication optimization completely transparent to the application programs. Many of the embeddings used in E-kernel are optimal or asymptotically optimal with respect to minimum dilation cost. The implementation of E-kernel translates many theoretical results in graph embeddings into practical tools for program mapping and network reconfiguration in a parallel system. E-kernel is functional on Victor V256, and measurements of its performance on V256 are included.
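As an example of the kind of embedding E-kernel relies on, the sketch below builds a dilation-1 embedding of an even-size ring onto a 2D mesh using a classical Hamiltonian-cycle construction (row 0 traversed fully, a snake over the remaining rows, and column 0 as the return path). This is one standard construction, offered under the assumption that it resembles, but is not necessarily identical to, the embeddings used in E-kernel.

```python
def ring_to_mesh(m, n):
    """Dilation-1 embedding of an (m*n)-node ring onto an m x n mesh.

    Returns cycle[i] = (row, col) such that consecutive ring nodes
    (including the wraparound) map to adjacent mesh nodes. Requires
    m even; a Hamiltonian cycle of a grid needs an even side.
    """
    assert m % 2 == 0 and m >= 2 and n >= 2
    cycle = [(0, c) for c in range(n)]           # row 0, left to right
    for r in range(1, m):                        # snake over columns 1..n-1
        cols = range(n - 1, 0, -1) if r % 2 else range(1, n)
        cycle += [(r, c) for c in cols]
    cycle += [(r, 0) for r in range(m - 1, 0, -1)]  # return up column 0
    return cycle

if __name__ == "__main__":
    cyc = ring_to_mesh(4, 4)
    # Verify dilation 1: every hop (incl. wraparound) is a mesh edge.
    ok = all(abs(a[0] - b[0]) + abs(a[1] - b[1]) == 1
             for a, b in zip(cyc, cyc[1:] + cyc[:1]))
    print(len(cyc), "nodes, dilation-1 ring:", ok)
```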
934.
Nonlinear adaptive filters based on a variety of neural network models have been used successfully for system identification and noise cancellation in a wide class of applications. An important problem in data communications is channel equalization, i.e., the removal of interference introduced by linear or nonlinear message-corrupting mechanisms, so that the originally transmitted symbols can be recovered correctly at the receiver. In this paper we introduce an adaptive recurrent neural network (RNN) based equalizer whose small size and high performance make it suitable for high-speed channel equalization. We propose RNN-based structures for both trained adaptation and blind equalization, and we evaluate their performance via extensive simulations for a variety of signal modulations and communication channel models. It is shown that the RNN equalizers perform comparably to traditional linear-filter-based equalizers when the channel interference is relatively mild, and that they outperform them by several orders of magnitude when either the channel's transfer function has spectral nulls or severe nonlinear distortion is present. In addition, the small-size RNN equalizers, being essentially generalized IIR filters, are shown to outperform multilayer perceptron equalizers of larger computational complexity in both linear and nonlinear channel equalization.
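A toy version of trained adaptation conveys the structure. The sketch below equalizes a mild BPSK intersymbol-interference channel with a single recurrent neuron; the channel model and the truncated instantaneous gradient (which treats the fed-back output as a plain input) are simplifications assumed for illustration, not the full recurrent training of the paper.

```python
import numpy as np

def rnn_equalizer_demo(n_train=4000, n_test=2000, seed=1):
    """Toy single-neuron recurrent equalizer for a BPSK ISI channel.

    Channel (assumed for this demo): r(n) = s(n) + 0.5*s(n-1) + noise.
    Equalizer: y(n) = tanh(w . [r(n), r(n-1), y(n-1), 1]); weights are
    adapted with an instantaneous gradient that treats y(n-1) as a
    constant input (a truncated approximation).
    """
    rng = np.random.default_rng(seed)
    s = rng.choice([-1.0, 1.0], size=n_train + n_test)
    r = s + 0.5 * np.concatenate(([0.0], s[:-1]))
    r += 0.1 * rng.standard_normal(r.shape)

    w, y_prev, r_prev, mu, errors = np.zeros(4), 0.0, 0.0, 0.05, 0
    for n in range(len(s)):
        x = np.array([r[n], r_prev, y_prev, 1.0])
        y = np.tanh(w @ x)
        if n < n_train:                      # trained adaptation phase
            e = s[n] - y
            w += mu * e * (1.0 - y * y) * x  # tanh'(net) = 1 - y^2
        elif np.sign(y) != s[n]:             # frozen-weight test phase
            errors += 1
        y_prev, r_prev = y, r[n]
    return errors / n_test

if __name__ == "__main__":
    print(f"test symbol error rate: {rnn_equalizer_demo():.4f}")
```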
935.
Cascade correlation is a flexible, efficient, and fast algorithm for supervised learning. It builds the network incrementally by adding hidden units one at a time, until the desired input/output mapping is achieved, and it connects all previously installed units to each new unit. Consequently, each new unit in effect adds a new layer, and the fan-in of the hidden and output units keeps increasing as more units are added. The resulting structure can be hard to implement in VLSI, because the connections are irregular and the fan-in is unbounded. Moreover, the depth, i.e., the propagation delay through the resulting network, is directly proportional to the number of units and can be excessive. We have modified the algorithm to generate networks with restricted fan-in and small depth (propagation delay) by controlling the connectivity. Our results reveal a tradeoff between connectivity and other performance attributes such as depth, total number of independent parameters, and learning time.
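The connectivity/depth tradeoff can be seen with a purely structural sketch. The function below grows a cascade network while capping fan-in and records each hidden unit's depth; the shallowest-predecessors selection policy is an assumption made for illustration, not the paper's rule, and weight training is omitted entirely.

```python
def grow_cascade(n_inputs, n_hidden, max_fanin):
    """Structural sketch of fan-in-restricted cascade-correlation growth.

    Each new hidden unit connects to at most max_fanin of its
    shallowest predecessors (inputs count as depth-0 units); standard
    cascade correlation connects to all of them. Returns the depth of
    each hidden unit. The selection policy is illustrative only.
    """
    depth = [0] * n_inputs                    # inputs sit at depth 0
    for _ in range(n_hidden):
        sources = sorted(range(len(depth)), key=lambda u: depth[u])[:max_fanin]
        depth.append(1 + max(depth[s] for s in sources))
    return depth[n_inputs:]

if __name__ == "__main__":
    # Unbounded fan-in: every unit feeds the next, so depth grows linearly.
    print(grow_cascade(n_inputs=4, n_hidden=6, max_fanin=10**9))
    # Restricted fan-in with shallow-first selection keeps depth small.
    print(grow_cascade(n_inputs=4, n_hidden=6, max_fanin=2))
```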
936.
A method is presented for the construction of fixed-order compensators that enforce an H∞ norm constraint for linear control systems with exogenous disturbances. The method is based on the celebrated bounded-real lemma, which characterizes the H∞ norm constraint via a Riccati inequality. The synthesis of fixed-order controllers whose dimension is less than the order of a given plant is demonstrated by a set of sufficient conditions along with a numerical algorithm.
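For reference, one standard strict form of the bounded-real lemma invoked here, stated for a stable plant with no direct feedthrough (the feedthrough-free case is an assumption made for brevity): the system dx/dt = Ax + Bw, z = Cx satisfies the bound ||G||∞ < γ if and only if there exists a symmetric positive definite P solving the Riccati inequality

```latex
\[
  A^{\mathsf{T}} P + P A
  + \frac{1}{\gamma^{2}}\, P B B^{\mathsf{T}} P
  + C^{\mathsf{T}} C \prec 0 .
\]
```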
937.
Low temperature wafer direct bonding
A pronounced increase in the interface energy of room-temperature-bonded hydrophilic Si/Si, Si/SiO2, and SiO2/SiO2 wafers has been observed after storage in air at room temperature or at 150°C for 10-400 h. The increased number of OH groups, due to a reaction between water and the strained oxide and/or silicon at the interface at temperatures below 110°C, and the formation of stronger siloxane bonds above 110°C appear to be the main mechanisms responsible for the increase in interface energy. After prolonged storage, interface bubbles are detectable by an infrared camera at the Si/Si bonding seam. Desorbed hydrocarbons, as well as hydrogen generated by a reaction of water with silicon, appear to be the major contents of the bubbles. Design guidelines for low-temperature wafer direct bonding technology are proposed.
938.
We present an all-aluminum MEMS process (Al-MEMS) for the fabrication of large-gap electrostatic actuators, with process steps that are compatible with the future use of underlying, pre-fabricated CMOS control circuitry. The process is purely additive above the substrate, as opposed to processes that depend on etching pits into the silicon, and thereby permits a high degree of design freedom. Multilayer aluminum metallization is used with organic sacrificial layers to build up the actuator structures, and oxygen-based dry etching is used to remove the sacrificial layers. While this approach has previously been used by other investigators to fabricate optical modulators and displays, the specific process presented here has been optimized for driving mechanical actuators with relatively large travels. The process is also intended to provide flexibility for design and future enhancements; for example, the gap height between the actuator and the underlying electrode(s) can be set using an adjustable polyimide sacrificial layer and an aluminum "post" deposition step. Several Al-MEMS electrostatic structures designed for use as mechanical actuators are presented, as well as some measured actuation characteristics.
939.
What are the implications for business when information technology (IT) in the workplace changes without a commensurate change in the composition of the business programs educating tomorrow's employees? A survey of MBA graduates forms the basis of this article on the IT skills needed in the marketplace.
940.
Conclusions. The generalized Chebyshev inequalities are of independent value in mathematical analysis, probability theory, and other fields. Survivability analysis of elements and systems requires the specification of functional probability characteristics, namely distributions of the current durability point. Probabilistic calculation of the survivability of complex systems can be carried out using logical-probabilistic methods [22, 28], because the probabilistic-physical meaning of the distribution of the current durability point at a point x is the failure probability of an element (or a system) given a deterministic level x of the next shock. The methodology of reliability theory can be updated by focusing on a physical stochastic process, instead of time to failure, as the cause of failure. In conclusion, I would like to thank I. A. Ibragimov for discussion of the results and some useful comments. Translated from Kibernetika i Sistemnyi Analiz, No. 2, pp. 159-166, March-April, 1994.
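For reference, the classical Chebyshev inequality that these generalizations extend, together with the failure-probability reading of the durability-point distribution stated above (the random variable X and the symbol F are notation introduced here for illustration): for X with mean μ and finite variance σ²,

```latex
\[
  \Pr\bigl(|X - \mu| \ge k\sigma\bigr) \le \frac{1}{k^{2}}
  \qquad \text{for all } k > 0,
\]
\[
  \Pr(\text{failure} \mid \text{next shock of level } x)
  = F(x) = \Pr(X \le x),
\]
```

where, in the second display, X is the current durability point and F is its distribution function.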