2,916 search results (query time: 15 ms).
71.
Determining order relationship between events of a distributed computation is a fundamental problem in distributed systems which has applications in many areas including debugging, visualization, checkpointing and recovery. Fidge/Mattern’s vector-clock mechanism captures the order relationship using a vector of size N in a system consisting of N processes. As a result, it incurs message and space overhead of N integers. Many distributed applications use synchronous messages for communication. It is therefore natural to ask whether it is possible to reduce the timestamping overhead for such applications. In this paper, we present a new approach for timestamping messages and events of a synchronously ordered computation, that is, when processes communicate using synchronous messages. Our approach depends on decomposing edges in the communication topology into mutually disjoint edge groups such that each edge group either forms a star or a triangle. We show that, to accurately capture the order relationship between synchronous messages, it is sufficient to use one component per edge group in the vector instead of one component per process. Timestamps for events are only slightly bigger than timestamps for messages. Many common communication topologies such as ring, grid and hypercube can be decomposed into edge groups, resulting in almost 50% improvement in both space and communication overheads. We prove that the problem of computing an optimal edge decomposition of a communication topology is NP-complete in general. We also present a heuristic algorithm for computing an edge decomposition whose size is within a factor of two of the optimal. We prove that, in the worst case, it is not possible to timestamp messages of a synchronously ordered computation using a vector containing fewer than components when N ≥ 2. Finally, we show that messages in a synchronously ordered computation can always be timestamped in an offline manner using a vector of size at most . 
An earlier version of this paper appeared in the Proceedings of the 2002 IEEE International Conference on Distributed Computing Systems (ICDCS). V. K. Garg was supported in part by NSF grants ECS-9907213 and CCR-9988225 and by an Engineering Foundation Fellowship. This work was done while C. Skawratananond was a Ph.D. student at the University of Texas at Austin.
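For context, the N-component Fidge/Mattern mechanism that this paper improves on can be sketched as follows. This is a standard textbook rendering, not code from the paper; the `Process` class and its update rules are illustrative.

```python
# Sketch of Fidge/Mattern vector clocks: each of N processes keeps
# a vector of N integers, and the componentwise order on timestamps
# captures the happened-before relation between events.

class Process:
    def __init__(self, pid, n):
        self.pid = pid
        self.clock = [0] * n              # one component per process

    def local_event(self):
        self.clock[self.pid] += 1         # tick own component
        return list(self.clock)           # timestamp of this event

    def send(self):
        self.clock[self.pid] += 1
        return list(self.clock)           # timestamp piggybacked on the message

    def receive(self, msg_clock):
        # componentwise max with the sender's clock, then tick
        self.clock = [max(a, b) for a, b in zip(self.clock, msg_clock)]
        self.clock[self.pid] += 1

def happened_before(u, v):
    """u happened before v iff u <= v componentwise and u != v."""
    return all(a <= b for a, b in zip(u, v)) and u != v
```

The paper's contribution is to shrink this per-message vector for synchronously ordered computations: one component per star or triangle edge group in the communication topology, instead of one per process.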
72.
Most fingerprint-based biometric systems store the minutiae template of a user in the database. It has traditionally been assumed that the minutiae template of a user does not reveal any information about the original fingerprint. In this paper, we challenge this notion and show that three levels of information about the parent fingerprint can be elicited from the minutiae template alone, viz., 1) the orientation field information, 2) the class or type information, and 3) the friction ridge structure. The orientation estimation algorithm determines the direction of local ridges using the evidence of minutiae triplets. The estimated orientation field, along with the given minutiae distribution, is then used to predict the class of the fingerprint. Finally, the ridge structure of the parent fingerprint is generated using streamlines that are based on the estimated orientation field. Line integral convolution is used to impart texture to the ensuing ridges, resulting in a ridge map resembling the parent fingerprint. The salient feature of this noniterative method to generate ridges is its ability to preserve the minutiae at specified locations in the reconstructed ridge map. Experiments using a commercial fingerprint matcher suggest that the reconstructed ridge structure bears close resemblance to the parent fingerprint.
73.
In Garg et al. (1999) and Garg (1992), the formalism of probabilistic languages for modeling the stochastic qualitative behavior of discrete event systems (DESs) was introduced. In this paper, we study their supervisory control, where control is exercised by dynamically disabling certain controllable events, thereby nulling the occurrence probabilities of disabled events and increasing the occurrence probabilities of enabled events proportionately. This is a special case of the "probabilistic supervision" introduced in Lawford and Wonham (1993). The control objective is to design a supervisor such that the controlled system never executes any illegal traces (their occurrence probability is zero), while legal traces occur with at least prespecified minimum occurrence probabilities. In other words, the probabilistic language of the controlled system lies within a prespecified range, where the upper bound is a "nonprobabilistic language" representing a legality constraint. We provide a condition for the existence of a supervisor. We also present an algorithm to test this existence condition when the probabilistic languages are regular (so that they admit probabilistic automata representations with finitely many states). Finally, we give a technique to compute a maximally permissive supervisor online.
74.
An analysis of 258 papers published from Singapore and covered in Science Citation Index (SCI) 1979 and 1980 indicates that (1) much of the R&D in Singapore pertains to medical research, (2) almost all the papers appear in English-language periodicals published in the Western world, (3) nearly two-thirds of Singapore's publication output is accounted for by the University of Singapore, and (4) by and large, papers from Singapore are rarely cited, even though many of them appeared in journals with an impact factor greater than one.
75.
A rapid screening system for heterogeneous catalyst discovery has been developed by coupling an in-house designed and fabricated high temperature vapor phase pulse reactor on-line to a GC-MS. The incorporation of gas chromatography for separation of the products with the mass spectrometry system allowed simultaneous identification and determination of reaction products and substrate conversion. This system was employed to study the vapor phase catalytic hydride transfer reduction (CHTR) of nitrobenzene with methanol as hydrogen donor on an MgO catalyst as a model reaction. Structural information of all the by-products that were formed was useful to understand the reaction mechanism. The products obtained with the new screening technique were in good agreement with conventional bench scale experiments. The rapid online screening provided an efficient methodology for optimization of reaction conditions such as catalyst loading, reaction temperature, and mole ratios. Response Surface Methodology (RSM) was used to optimize the conversion of reactants and selectivity of products.
76.
77.
In component‐based development, software systems are built by assembling components that have already been developed and prepared for integration. Complexity, reusability, dependability, and maintainability are the key aspects in estimating the quality of components, and the quality of each individual component influences the quality of the overall system. There is therefore a strong need to select the best-quality component, from both functional and nonfunctional aspects. The present paper provides a critical analysis of metrics for various quality aspects of components and component‐based systems, covering four main quality factors: complexity, dependency, reusability, and maintainability. A systematic study was applied to find as much literature as possible; a total of 49 papers were found suitable under the defined search criteria. The analysis in this paper has a distinct objective: we focus on the efficiency and practical applicability of the approaches proposed in the selected papers, and key attributes for these two criteria are defined. Each paper is evaluated against key parameters, viz., metrics definition, implementation technique, validation, usability, data source, comparative analysis, practicability, and extendibility; in some papers, the authors have also compared their results with other techniques. For characteristics like complexity and dependency, most of the proposed metrics are analytical. Soft computing and evolutionary approaches are either not used or much less explored so far for these aspects, which may be a direction for future research; hybrid approaches such as neuro‐fuzzy and neuro‐genetic may also be examined for evaluating these aspects. However, concluding that one particular technique is better than the others may not be appropriate: a technique may perform best for one characteristic on a given set of inputs and datasets, yet not on different inputs. The intention of the proposed work is to give a score to each metric proposed by the researchers based on the selected parameters, certainly not to criticize any author's research contribution. Copyright © 2012 John Wiley & Sons, Ltd.
78.
In this paper, we propose a decentralized parallel computation model for global optimization using interval analysis. The model adapts to any number of processors, and the workload is automatically and evenly distributed among all processors by alternating message passing. The problems received by each processor are processed based on their local dominance properties, which avoids unnecessary interval evaluations. Further, the problem is treated as a whole at the beginning of the computation, so no initial decomposition scheme is required. Numerical experiments indicate that the model works well, is stable with different numbers of parallel processors, distributes the load evenly among them, and provides an impressive speedup, especially when the problem is time-consuming to solve.
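The abstract itself contains no code; as a rough illustration of the sequential core that such a parallel model distributes, here is a best-first interval branch-and-bound sketch. The objective f(x) = (x − 1)² and its interval extension `F` are assumptions chosen for the example, not taken from the paper.

```python
# Best-first interval branch-and-bound for global minimization on
# [lo, hi]. F(a, b) must return an enclosure (flo, fhi) of f over
# [a, b]; boxes whose lower bound exceeds the best proven upper
# bound on the minimum cannot contain a minimizer and are discarded.
import heapq

def interval_min(F, lo, hi, tol=1e-6):
    """Return an interval of width <= tol with the smallest lower
    bound on f, i.e. a candidate enclosure of a global minimizer."""
    flo, fhi = F(lo, hi)
    heap = [(flo, lo, hi)]        # ordered by lower bound (best first)
    best_upper = fhi              # best proven upper bound on min f
    while heap:
        flo, a, b = heapq.heappop(heap)
        if flo > best_upper:      # dominance test: box cannot win
            continue
        if b - a <= tol:
            return (a, b)         # narrow enough, smallest lower bound
        m = 0.5 * (a + b)         # bisect and bound both halves
        for a2, b2 in ((a, m), (m, b)):
            f2lo, f2hi = F(a2, b2)
            best_upper = min(best_upper, f2hi)
            if f2lo <= best_upper:
                heapq.heappush(heap, (f2lo, a2, b2))
    return (lo, hi)

# Illustrative interval extension of f(x) = (x - 1)^2:
# shift the interval by 1, then square it tightly.
def F(a, b):
    lo, hi = a - 1.0, b - 1.0
    if lo <= 0.0 <= hi:           # squared interval touches zero
        return 0.0, max(lo * lo, hi * hi)
    vals = (lo * lo, hi * hi)
    return min(vals), max(vals)
```

The dominance test here is the "local dominance" idea in its simplest serial form; the paper's model additionally spreads the surviving boxes over processors via message passing.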
79.
Determining service configurations is essential for effective service management. In this paper we describe a model-driven approach for service configuration auto-discovery. We develop metrics for performance and scalability analysis of such auto-discovery mechanisms. Our approach addresses several problems in auto-discovery: specifying which services to discover, efficiently distributing service discovery, and matching instances of services into related groups. We use object-oriented models for discovery specifications, a flexible bus-based architecture for distribution and communication, and a novel multi-phased instance-matching approach. We have applied our approach to typical e-commerce services, Enterprise Resource Planning applications such as SAP, and Microsoft Exchange services running on a mixture of Windows and Unix platforms. The main contribution of this work is the flexibility of our models, architecture, and algorithms in addressing the discovery of a multitude of services.
80.
In this paper we present a data-parallel volume rendering algorithm that possesses numerous advantages over prior published solutions. Volume rendering is a three-dimensional graphics rendering algorithm that computes views of sampled medical and simulation data, but it has been much slower than other graphics algorithms because of the data set sizes and the computational complexity. Our algorithm uses permutation warping to achieve linear speedup (run time is O(S/P) for P processors when P = O(S/log S), for S = n³ samples), linear storage (O(S)) for large data sets, arbitrary view directions, and high-quality filters. We derived a new processor permutation assignment of five passes (our prior known solution was eight passes), and a new parallel compositing technique that is essential for scaling linearly on machines that have more processors than view rays to process (P > n²). We show a speedup of 15.7 for a 16K-processor over a 1K-processor MasPar MP-1 (16 is linear) and two frames/second with a 128³ volume and trilinear view reconstruction. In addition, we demonstrate volume sizes of 256³, constant run time over angles of 5 to 75°, filter quality comparisons, and communication congestion of just 19 to 29%.
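The parallel compositing technique relies on the fact that the front-to-back "over" operator is associative, so disjoint runs of ray samples can be composited independently on different processors and merged afterwards. A minimal sketch with illustrative premultiplied (color, alpha) samples, not the paper's actual code:

```python
# Porter-Duff "over" on premultiplied (color, alpha) pairs, and
# front-to-back compositing of the samples along one view ray.

def over(front, back):
    """Composite two (color, alpha) segments, front over back."""
    fc, fa = front
    bc, ba = back
    return (fc + (1.0 - fa) * bc, fa + (1.0 - fa) * ba)

def composite(samples):
    """Fold a list of (color, alpha) samples front to back."""
    acc = (0.0, 0.0)              # start fully transparent
    for s in samples:
        acc = over(acc, s)
    return acc
```

Because `over` is associative, `over(composite(first_half), composite(second_half))` equals `composite(all_samples)`, which is exactly what permits tree-structured parallel merging across processors when P > n².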