Article Search
By access type:
  Full text (paid): 699
  Full text (free): 13
By subject:
  Electrical engineering: 3
  Chemical industry: 148
  Metalworking: 3
  Machinery and instrumentation: 11
  Building science: 29
  Mining engineering: 1
  Energy and power: 8
  Light industry: 19
  Petroleum and natural gas: 4
  Radio and electronics: 60
  General industrial technology: 107
  Metallurgical industry: 109
  Nuclear technology: 4
  Automation technology: 206
By publication year:
  2022: 9
  2021: 4
  2020: 5
  2019: 11
  2018: 6
  2017: 9
  2016: 14
  2015: 7
  2014: 21
  2013: 34
  2012: 22
  2011: 29
  2010: 34
  2009: 31
  2008: 23
  2007: 37
  2006: 32
  2005: 24
  2004: 22
  2003: 18
  2002: 20
  2001: 21
  2000: 11
  1999: 11
  1998: 18
  1997: 17
  1996: 14
  1995: 5
  1994: 8
  1993: 9
  1992: 10
  1991: 6
  1990: 11
  1989: 12
  1988: 10
  1987: 12
  1986: 14
  1985: 17
  1984: 15
  1983: 14
  1982: 11
  1981: 10
  1980: 4
  1979: 7
  1978: 7
  1977: 5
  1976: 6
  1975: 2
  1973: 3
  1972: 3
712 search results found (search time: 0 ms)
51.
Network State Estimation and Prediction for Real-Time Traffic Management   (Cited: 1; self-citations: 0; citations by others: 1)
Advanced Traveler Information Systems (ATIS) and Advanced Traffic Management Systems (ATMS) have the potential to contribute to the solution of the traffic congestion problem. DynaMIT is a real-time system that can be used to generate guidance for travelers. The main principle on which DynaMIT is based is that information should be consistent and user-optimal. Consistency implies that the traffic conditions experienced by the travelers are consistent with the conditions assumed in generating the guidance. To generate consistent, user-optimal information, DynaMIT performs two main functions: state estimation and prediction. A demand simulator and a supply simulator interact to perform these tasks. A case study demonstrates the value of the system.
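The abstract describes consistency as a fixed-point requirement: the conditions travelers experience under the guidance must match the conditions assumed when the guidance was generated. The sketch below is a minimal, hypothetical illustration of that loop only; the callables (predict, generate, distance) are placeholders and do not correspond to DynaMIT's actual components or API.

```python
# Hypothetical sketch of the consistency loop described in the abstract:
# guidance is accepted only when the conditions predicted under it match the
# conditions assumed when the guidance was generated. All names below are
# illustrative placeholders, not DynaMIT's actual interface.

def consistent_guidance(estimated_state, predict, generate, distance,
                        tol=1e-3, max_iter=50):
    """Fixed-point iteration between guidance generation and prediction
    (the role played by the interacting demand and supply simulators)."""
    assumed = estimated_state                  # conditions assumed for the first guess
    for _ in range(max_iter):
        guidance = generate(assumed)           # user-optimal guidance for assumed conditions
        experienced = predict(guidance)        # conditions travelers would experience under it
        if distance(assumed, experienced) < tol:
            return guidance                    # consistent: assumed matches experienced
        assumed = experienced                  # iterate with the newly predicted conditions
    return guidance                            # best effort if no fixed point was reached
```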
52.
Synthesis is the automated construction of a system from its specification. The system has to satisfy its specification in all possible environments. The environment often consists of agents that have objectives of their own. Thus, it makes sense to soften the universal quantification on the behavior of the environment and take the objectives of its underlying agents into account. Fisman et al. introduced rational synthesis: the problem of synthesis in the context of rational agents. The input to the problem consists of temporal logic formulas specifying the objectives of the system and the agents that constitute the environment, and a solution concept (e.g., Nash equilibrium). The output is a profile of strategies, for the system and the agents, such that the objective of the system is satisfied in the computation that is the outcome of the strategies, and the profile is stable according to the solution concept; that is, the agents that constitute the environment have no incentive to deviate from the strategies suggested to them. In this paper we continue to study rational synthesis. First, we suggest an alternative definition of rational synthesis, in which the agents are rational but not cooperative. We call this problem strong rational synthesis. In the strong rational synthesis setting, one cannot assume that the agents that constitute the environment take into account the strategies suggested to them. Accordingly, the output is a strategy for the system only, and the objective of the system has to be satisfied in all the compositions that are the outcome of a stable profile in which the system follows this strategy. We show that strong rational synthesis is 2ExpTime-complete; thus it is no more complex than traditional synthesis or rational synthesis. Second, we study a richer specification formalism, in which the objectives of the system and the agents are not Boolean but quantitative. In this setting, the objective of the system and the agents is to maximize their outcome. The quantitative setting significantly extends the scope of rational synthesis, making the game-theoretic approach much more relevant. Finally, we enrich the setting to one that allows coalitions of agents that constitute the system or the environment.
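As an illustration of the stability requirement alone (no agent in the environment has an incentive to deviate from the suggested profile), the sketch below checks whether a profile is a pure Nash equilibrium in a finite normal-form game. It is only a finite analogue of the solution concept, not of the temporal-logic synthesis problem itself, and all names are hypothetical.

```python
# Finite analogue of the stability requirement used in rational synthesis:
# a profile is a (pure) Nash equilibrium if no single player can improve its
# payoff by deviating unilaterally. Illustration of the solution concept only.

from itertools import product

def is_nash_equilibrium(payoff, strategies, profile):
    """payoff(player, profile) -> number; strategies[i] = iterable of choices;
    profile is a tuple with one choice per player."""
    for player, choice in enumerate(profile):
        current = payoff(player, profile)
        for alternative in strategies[player]:
            if alternative == choice:
                continue
            deviated = profile[:player] + (alternative,) + profile[player + 1:]
            if payoff(player, deviated) > current:
                return False          # profitable unilateral deviation exists
    return True

def pure_nash_profiles(payoff, strategies):
    """Enumerate all stable (pure) profiles of a finite game."""
    return [p for p in product(*strategies)
            if is_nash_equilibrium(payoff, strategies, p)]
```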
53.
In this paper we demonstrate how genetic algorithms can be used to reverse engineer an evaluation function's parameters for computer chess. Our results show that, using an appropriate expert (or mentor), we can evolve a program that is on par with top tournament-playing chess programs, outperforming a two-time World Computer Chess Champion. This performance gain is achieved by evolving a program that mimics the behavior of a superior expert. The resulting evaluation function of the evolved program consists of a much smaller number of parameters than the expert's. The extended experimental results provided in this paper include a report on our successful participation in the 2008 World Computer Chess Championship. In principle, our expert-driven approach could be used in a wide range of problems for which appropriate experts are available.
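A minimal, hypothetical sketch of the expert-driven idea described above: evolve the weights of a linear evaluation function so that its scores agree with a mentor's evaluations on a set of positions. The feature extraction, the mentor, and all parameters below are placeholders; this is not the authors' implementation.

```python
# Hypothetical sketch: evolve evaluation-function weights whose fitness is the
# (negated) disagreement with a mentor's evaluations on a set of positions.
# positions, mentor_eval, and features are placeholders supplied by the caller.

import random

def evolve_weights(positions, mentor_eval, features, n_weights,
                   pop_size=50, generations=100, mutation_sigma=0.1):
    # assumes n_weights >= 2
    def evaluate(weights, pos):
        return sum(w * f for w, f in zip(weights, features(pos)))

    def fitness(weights):
        # smaller disagreement with the mentor means higher fitness
        return -sum(abs(evaluate(weights, p) - mentor_eval(p)) for p in positions)

    population = [[random.uniform(-1, 1) for _ in range(n_weights)]
                  for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=fitness, reverse=True)
        parents = population[: pop_size // 2]          # truncation selection
        children = []
        while len(parents) + len(children) < pop_size:
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, n_weights)        # one-point crossover
            child = [w + random.gauss(0, mutation_sigma)  # Gaussian mutation
                     for w in a[:cut] + b[cut:]]
            children.append(child)
        population = parents + children
    return max(population, key=fitness)
```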
54.
The Importance of Neutral Examples for Learning Sentiment   (Cited: 2; self-citations: 0; citations by others: 2)
Most research on learning to identify sentiment ignores "neutral" examples, learning only from examples of significant (positive or negative) polarity. We show that it is crucial to use neutral examples in learning polarity for a variety of reasons. Learning from negative and positive examples alone will not permit accurate classification of neutral examples. Moreover, the use of neutral training examples in learning facilitates better distinction between positive and negative examples.
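A toy sketch (not the paper's experimental setup) of the point being made: a classifier trained only on positive and negative documents cannot output a neutral label, whereas adding neutral training examples makes the three-way distinction learnable. The data below is a placeholder.

```python
# Toy illustration only: binary vs. three-way sentiment training.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

texts  = ["great product, loved it", "terrible, waste of money",
          "arrived on tuesday in a cardboard box", "absolutely fantastic",
          "worst purchase ever", "it is a phone with a screen"]
labels = ["positive", "negative", "neutral",
          "positive", "negative", "neutral"]

# Trained without neutral examples: every document is forced into positive/negative.
binary = make_pipeline(TfidfVectorizer(), LogisticRegression())
binary.fit([t for t, y in zip(texts, labels) if y != "neutral"],
           [y for y in labels if y != "neutral"])

# Trained with neutral examples: the neutral label becomes available.
three_way = make_pipeline(TfidfVectorizer(), LogisticRegression())
three_way.fit(texts, labels)

test = "the package contains a charger and a manual"
print(binary.predict([test]))     # can only answer positive or negative
print(three_way.predict([test]))  # may answer "neutral"
```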
55.
This paper introduces a wide-spectrum specification logic νZ. The minimal core logic is extended to a more expressive specification logic that includes a schema calculus similar (but not equivalent) to Z, new schema operators, and extensions to programming and program development logics.
56.
57.
Breakdowns in SiO2 have been classified as defect-related, due to wear-out, and intrinsic. However, techniques to ascertain defect densities and breakdown rates at defects have not been available, nor has the distinction between wear-out and defect-related or intrinsic breakdowns been clearly demonstrated. A particular problem has been the inability to distinguish defect types, i.e. defects having different breakdown rates. Another source of confusion has been the tacit assumption that breakdown field histograms obtained from ramp breakdown tests are independent of the ramp rate, which cannot be valid for finite breakdown rates. We obtained relationships specifying the statistics of breakdown, including the effect of defects. These derive from results describing a Markov death process and depend on the time integrals of breakdown rates in defect-free regions and at defects, and on parameters describing the defect distributions. For Poisson distributions of the defects, these parameters are the mean number of defects per device for each defect type. Any breakdown test is described by the same relations, since the nature of the test enters only through the time integral of the breakdown rates. If a wear-out mechanism is operative, then the breakdown rates will depend on the time explicitly, i.e. not only via the time dependences of the applied field and temperature. Procedures for obtaining defect densities and breakdown rates follow from the derived dependence of the expectation value of the fraction of devices broken down on these quantities. Ramp tests at various ramp rates are advantageous for this purpose. The field dependence of the breakdown rates can be extracted directly from the experimental data, and no a priori form for this dependence need be assumed. Experimental results obtained from multiple ramp breakdown tests will be presented. The field dependence of the breakdown rates is found to deviate significantly from a simple exponential dependence. Following Klein, the effect of fluctuations on the breakdown rates will be considered qualitatively to rationalize their observed field dependence. No explicit time dependence of the breakdown rates is indicated over the range of fields covered by the data, implying the absence of wear-out.
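The abstract does not reproduce its formulas. Under the assumptions it states (a Markov death process and Poisson-distributed defect counts with mean λ_k defects of type k per device), one plausible form of the relation it describes, reconstructed here only as an illustration, is:

```latex
% Illustrative reconstruction, assuming independent defects and Poisson
% defect counts with mean \lambda_k defects of type k per device.
\[
  H_0(t) = \int_0^t r_0\bigl(E(s),T(s)\bigr)\,ds, \qquad
  H_k(t) = \int_0^t r_k\bigl(E(s),T(s)\bigr)\,ds,
\]
\[
  F(t) \;=\; 1 \;-\; \exp\!\Bigl(-H_0(t) \;-\; \sum_k \lambda_k \bigl(1 - e^{-H_k(t)}\bigr)\Bigr),
\]
```

where F(t) is the expected fraction of devices broken down by time t, r_0 is the breakdown rate in defect-free regions, r_k the rate at a defect of type k, and E(s), T(s) the applied field and temperature. Because the test history enters only through the integrals H, the same relation covers ramp and constant-stress tests, consistent with the abstract's statement.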
58.
In this article we present an algorithm that learns to predict non-deterministically generated strings. The problem of learning to predict non-deterministically generated strings was raised by Dietterich and Michalski (1986). While their objective was to give heuristic techniques that could be used to rapidly and effectively learn to predict a somewhat limited class of strings, our objective is to give an algorithm which, though impractical, is capable of learning to predict a very general class. Our algorithm is meant to provide a general framework within which heuristic techniques can be effectively employed.
59.
The object-process methodology incorporates the system's static-structural and dynamic-procedural aspects into a single, unified model. This unification bridges the gap that separates the static object model from the dynamic behavior-, state-, or function-oriented models found in many current object-oriented methodologies. In this work we concentrate on the transition from object-process analysis to design within the development of information systems. We use a detailed case study as a running example throughout the paper to demonstrate how the structure-behavior unification, which characterizes object-process analysis, carries over to object-process design. The case study first applies object-process analysis to perform the analysis stage. The sequence of steps that constitutes the design is then discussed and demonstrated through the case study. The design is divided into two phases: the analysis refinement phase and the implementation-dependent phase. Analysis refinement is concerned with adding details to the analysis results that are beyond the scope of the analysis itself yet are not tied to a particular implementation. The implementation-dependent phase is concerned with code-level design, which takes place after specific implementation decisions, such as the programming language, data organization, and user interface, have been made during strategic design.
60.
In classical deterministic scheduling problems, it is assumed that all jobs have to be processed. However, in many practical cases, mostly in highly loaded make-to-order production systems, accepting all jobs may delay the completion of orders, which in turn may lead to high inventory and tardiness costs. Thus, in such systems, the firm may wish to reject the processing of some jobs, either by outsourcing them or by rejecting them altogether. The field of scheduling with rejection provides schemes for coordinated sales and production decisions by grouping them into a single model. Since scheduling problems with rejection are very interesting from both a practical and a theoretical point of view, they have received a great deal of attention from researchers over the last decade. The purpose of this survey is to offer a unified framework for offline scheduling with rejection by presenting an up-to-date survey of the results in this field. Moreover, we highlight the close connection between scheduling with rejection and other fields of research, such as scheduling with controllable processing times and scheduling with due date assignment, and include some new results that we obtained for open problems.
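The unified model is not spelled out in the abstract; a generic formulation commonly used in this literature, given here only as an illustration, splits the job set J into accepted and rejected jobs and trades a scheduling criterion off against the total rejection penalty:

```latex
% Illustrative generic formulation of scheduling with rejection (not quoted
% from the survey): jobs in J are either accepted (set A) and scheduled, or
% rejected at a job-dependent penalty e_j.
\[
  \min_{A \subseteq J} \;\Bigl(\, \gamma\bigl(\sigma^{*}(A)\bigr) \;+\; \sum_{j \in J \setminus A} e_j \,\Bigr),
\]
```

where γ is a classical scheduling criterion (e.g., makespan or total weighted tardiness) evaluated on an optimal schedule σ*(A) of the accepted jobs, and e_j is the rejection (e.g., outsourcing) cost of job j.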