931.
An implicit tenet of modern search heuristics is that two desirable goals trade off against each other: search diversity (or distribution), i.e., searching through a maximum number of distinct areas, and search intensity, i.e., maximum exploitation within each specific area. We claim that the hypothesis that these goals are mutually exclusive is false in parallel systems, and argue that it is possible to devise methods that exhibit both high search intensity and high search diversity throughout the algorithmic execution. We consider how distance metrics, i.e., functions for measuring diversity (given by the minimum number of local search steps between two solutions), and coordination policies, i.e., mechanisms for directing and redirecting search processes based on the information acquired through the distance metrics, can be combined into a framework for developing advanced collective search methods in which search intensity and search diversity coexist. The presented model also avoids the undesirable occurrence of a problem we refer to as the 'ergometric bike phenomenon'. Finally, this work is one of the very few analyses conducted at the level of meta-meta-heuristics, because all arguments are independent of the specific problem handled (scheduling, planning, etc.), of the specific solution method (genetic algorithms, simulated annealing, tabu search, etc.), and of specific neighborhood or genetic operators (2-opt, crossover, etc.).
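The abstract's distance metric (the minimum number of local-search steps between two solutions) has a closed form for permutation-encoded solutions under a swap neighbourhood: the minimum number of transpositions turning one permutation into another equals n minus the number of cycles of their composition (the Cayley distance). A minimal sketch (the function name is ours; the paper does not prescribe a particular encoding):

```python
def cayley_distance(p, q):
    """Minimum number of swap (transposition) moves needed to turn
    permutation p into permutation q: n minus the number of cycles of
    the permutation that maps each element's position in p to its
    position in q."""
    n = len(p)
    pos_q = {v: i for i, v in enumerate(q)}   # position of each value in q
    t = [pos_q[v] for v in p]                 # relabelled permutation
    seen = [False] * n
    cycles = 0
    for i in range(n):
        if not seen[i]:
            cycles += 1
            j = i
            while not seen[j]:                # walk one cycle of t
                seen[j] = True
                j = t[j]
    return n - cycles
```

Identical permutations are at distance 0; a single swap is at distance 1.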
932.
This paper presents a general algorithmic framework for computing the IPA derivatives of sample performance functions defined on networks of fluid queues. The underlying network model consists of bi-layered hybrid dynamical systems with continuous-time dynamics at the lower layer and discrete-event dynamics at the upper layer. The linearized system, computed from the sample path via a discrete-event process, yields fairly simple algorithms for the IPA derivatives. As an application example, the paper discusses loss and workload performance functions in a tandem network with congestion control, subject to signal delays.
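While the paper treats fluid-queue networks, the flavour of IPA is easy to show on a discrete single-server FIFO queue with service times scaled by a parameter θ: the pathwise derivative of each system time is propagated alongside the Lindley recursion. A toy sketch (our illustration, not the paper's algorithm):

```python
def ipa_mean_system_time(theta, X, A):
    """Single-server FIFO queue with service times theta * X[i].
    X: service requirements; A[i]: interarrival time to the next customer.
    Returns the sample-mean system time and its IPA (pathwise) derivative
    with respect to theta, accumulated along the Lindley recursion."""
    W = dW = 0.0            # current customer's waiting time and derivative
    total = dtotal = 0.0
    for x, a in zip(X, A):
        S = theta * x       # service time; dS/dtheta = x
        total += W + S      # system time T = W + S
        dtotal += dW + x    # dT/dtheta
        slack = W + S - a   # Lindley recursion for the next customer
        if slack > 0:
            W, dW = slack, dW + x   # busy period continues: carry derivative
        else:
            W, dW = 0.0, 0.0        # idle period resets the derivative
    n = len(X)
    return total / n, dtotal / n
```

On a path with no event-order change, this derivative matches a finite difference exactly.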
933.
Finding the optimal design for a discrete event dynamic system (DEDS) is in general difficult due to the large search space and the simulation-based performance evaluation. Various heuristics have been developed to find good designs. An important question is how to quantify the goodness of the heuristic designs. Inspired by Ordinal Optimization, which has become an important tool for optimizing DEDS, we provide a method which can quantify the goodness of a design. By comparing with a set of designs that are uniformly sampled, we measure the ordinal performances of heuristic designs, i.e., we quantify the ranks of all (or some of) the heuristic designs among all the designs in the entire search space. The mathematical tool we use is hypothesis testing, and the probability of making a Type II error in the quantification is controlled to be under a very low level. The method can be used both when the performances of the designs can be accurately evaluated and when such performances are estimated by a crude but computationally easy model. The method can quantify both heuristics that output a single design and those that output a set of designs. The method is demonstrated through numerical examples.
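The binomial test underlying such a rank quantification can be sketched as follows (our simplified reading, not the paper's exact procedure): if a design lay outside the top-α fraction of the space, the number of uniformly sampled designs beating it would be stochastically at least Binomial(n, α), so a small left-tail probability lets us reject that hypothesis.

```python
from math import comb

def quantile_confidence(num_better, n, alpha):
    """One-sided binomial tail P[Binomial(n, alpha) <= num_better].
    num_better: how many of n uniformly sampled designs outperformed the
    heuristic design. If this probability is below the chosen error level,
    reject the hypothesis that the design lies outside the top-alpha
    fraction of the design space."""
    return sum(comb(n, k) * alpha**k * (1 - alpha)**(n - k)
               for k in range(num_better + 1))
```

For example, if none of 50 uniform samples beats the heuristic design, the tail is 0.9^50 ≈ 0.005, so the design is in the top 10% at the 1% level.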
934.
Failure diagnosability has been widely studied for discrete event system (DES) models because of modeling simplicity and computational efficiency due to abstraction. In the literature it is often held that, for diagnosability, such models can be used not only for systems that fall naturally in the class of DES but also for those traditionally treated as continuous variable dynamic systems. A class of algorithms for failure diagnosability of DES models has been successfully developed for systems where fairness is not a part of the model. These algorithms are based on detecting cycles in the normal and the failure model that look identical. However, there exist systems with all transitions fair where the diagnosability condition that hinges upon this feature renders many failures non-diagnosable, although they may actually be diagnosable by transitions out of a cycle. Hence, the diagnosability conditions based on cycle detection need to be modified to hold for many real-world systems where all transitions are fair. In this work, however, it is shown by means of an example that a system may have some transitions fair and some unfair. A new failure diagnosability mechanism is proposed for DES models with both fair and unfair transitions. Time complexity for deciding diagnosability of DES models with fair and unfair transitions is analyzed and compared with the time complexities of other DES diagnosability analysis methods reported in the literature.
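The cycle-based test the abstract refers to can be sketched as a product construction: synchronize the normal and post-failure models on shared observable labels and search for a reachable cycle, which witnesses behaviours that look identical forever. A toy version (our simplification; the paper's fairness-aware conditions refine exactly this kind of test):

```python
def has_indistinguishable_cycle(normal, faulty, init_n, init_f):
    """Automata as dicts: state -> {observable label: next state}.
    Builds the synchronous product of the normal and post-failure models
    on shared labels and returns True iff a cycle is reachable, i.e. the
    two behaviours can produce identical observations forever (the failure
    is not diagnosable by a plain cycle-detection test)."""
    WHITE, GRAY, BLACK = 0, 1, 2
    color = {}

    def succs(state):
        sn, sf = state
        for label, tn in normal.get(sn, {}).items():
            tf = faulty.get(sf, {}).get(label)
            if tf is not None:          # both models can emit this label
                yield (tn, tf)

    def dfs(u):
        color[u] = GRAY
        for v in succs(u):
            c = color.get(v, WHITE)
            if c == GRAY:               # back edge: cycle in the product
                return True
            if c == WHITE and dfs(v):
                return True
        color[u] = BLACK
        return False

    return dfs((init_n, init_f))
```

A faulty model that can only emit a label the normal model never emits yields no product cycle, so the failure is diagnosable by this test.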
935.
We study approaches that fit a linear combination of basis functions to the continuation value function of an optimal stopping problem and then employ a greedy policy based on the resulting approximation. We argue that computing weights to maximize expected payoff of the greedy policy or to minimize expected squared-error with respect to an invariant measure is intractable. On the other hand, certain versions of approximate value iteration lead to policies competitive with those that would result from optimizing the latter objective.
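A regression sketch in the spirit of this approach (our illustration, not the authors' algorithm): fit weights by least squares over sampled states and continuation values, then stop greedily whenever the immediate payoff beats the fitted continuation value.

```python
import numpy as np

def fit_continuation(states, continuation_values, basis_fns):
    """Least-squares fit of a linear combination of basis functions to
    sampled continuation values: solves min_w ||Phi w - y||^2."""
    Phi = np.array([[phi(s) for phi in basis_fns] for s in states])
    w, *_ = np.linalg.lstsq(Phi, np.array(continuation_values), rcond=None)
    return w

def greedy_stop(state, payoff, w, basis_fns):
    """Greedy policy induced by the approximation: stop when the immediate
    payoff is at least the approximate continuation value."""
    cont = sum(wi * phi(state) for wi, phi in zip(w, basis_fns))
    return payoff(state) >= cont
```

With a constant and a linear basis function, data generated by 1 + 2s is recovered exactly.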
936.
We consider Discrete Event Systems (DES) involving tasks with real-time constraints and seek to control processing times so as to minimize a cost function subject to each task meeting its own constraint. It has been shown that the off-line version of this problem can be efficiently solved by the Critical Task Decomposition Algorithm (CTDA) (Mao et al., IEEE Trans Mobile Comput 6(6):678–688, 2007). In the on-line version, random task characteristics (e.g., arrival times) are not known in advance. To bypass this difficulty, worst-case analysis may be used. This, however, does not make use of probability distributions and results in an overly conservative solution. In this paper, we develop a new approach which does not rely on worst-case analysis but provides a "best solution in probability" efficiently obtained by estimating the probability distribution of sample-path-optimal solutions. We introduce a condition termed "non-singularity" under which the best solution in probability leads to the on-line optimal control. Numerical examples are included to illustrate our results and show substantial performance improvements over worst-case analysis.
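The "best solution in probability" idea can be illustrated with a toy estimator (our reading, not the CTDA-based method of the paper): simulate scenarios, record which candidate is sample-path optimal in each, and pick the candidate that is optimal most often.

```python
import random
from collections import Counter

def best_in_probability(candidates, cost, sample_scenario, n_sims=1000, seed=0):
    """Estimate the candidate most likely to be sample-path optimal:
    draw random scenarios, find each scenario's optimal candidate, and
    return the most frequent winner along with the full tally."""
    rng = random.Random(seed)
    wins = Counter()
    for _ in range(n_sims):
        omega = sample_scenario(rng)                      # random scenario
        wins[min(candidates, key=lambda u: cost(u, omega))] += 1
    return wins.most_common(1)[0][0], wins
```

For a quadratic cost with the scenario distribution concentrated at 2, candidate 2 wins.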
938.
Improved PLSOM algorithm
The original Parameter-Less Self-Organising Map (PLSOM) algorithm was introduced as a solution to the problems the Self-Organising Map (SOM) encounters when dealing with certain types of mapping tasks. Unfortunately, the PLSOM suffers from over-sensitivity to outliers and over-reliance on the initial weight distribution. The PLSOM2 algorithm is introduced to address these problems with the PLSOM. PLSOM2 is able to cope well with outliers without exhibiting the problems associated with the standard PLSOM algorithm. The PLSOM2 requires very little computational overhead compared to the standard PLSOM, thanks to an efficient method of approximating the diameter of the inputs, and does not rely on a priori knowledge of the training input space. This paper provides a discussion of the reasoning behind the PLSOM2 and experimental results showing its effectiveness for mapping tasks.
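A simplified sketch of a PLSOM2-style update (our condensation, not the published pseudocode): the learning rate is the winner's quantization error normalized by an online bounding-box estimate of the input diameter, so no a priori knowledge of the input space is needed.

```python
import numpy as np

def plsom2_step(weights, grid, x, state, beta=3.0):
    """One PLSOM2-style update. weights: (n_nodes, dim) codebook vectors;
    grid: (n_nodes, 2) map coordinates; state: dict holding the running
    per-dimension input bounds. This is a simplified sketch: the diameter
    approximation and neighbourhood shape are our assumptions."""
    # online bounding-box estimate of the input diameter
    state['lo'] = np.minimum(state['lo'], x)
    state['hi'] = np.maximum(state['hi'], x)
    diameter = np.linalg.norm(state['hi'] - state['lo'])
    # winner node and quantization error normalized to [0, 1]
    d = np.linalg.norm(weights - x, axis=1)
    c = int(np.argmin(d))
    eps = min(d[c] / diameter, 1.0) if diameter > 0 else 1.0
    # neighbourhood width shrinks as the map fits the data better
    gd2 = np.sum((grid - grid[c]) ** 2, axis=1)
    h = np.exp(-gd2 / (beta * eps + 1e-12) ** 2)
    weights += eps * h[:, None] * (x - weights)
    return weights
```

On the very first sample the diameter estimate is zero, so eps = 1 and the winner snaps onto the input.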
939.
Translating between dissimilar languages requires an account of the use of divergent word orders when expressing the same semantic content. Reordering poses a serious problem for statistical machine translation systems and has generated a considerable body of research aimed at meeting its challenges. Direct evaluation of reordering requires automatic metrics that explicitly measure the quality of word order choices in translations. Current metrics, such as BLEU, only evaluate reordering indirectly. We analyse the ability of current metrics to capture reordering performance. We then introduce permutation distance metrics as a direct method for measuring word order similarity between translations and reference sentences. By correlating all metrics with a novel method for eliciting human judgements of reordering quality, we show that current metrics are largely influenced by lexical choice, and that they are not able to distinguish between different reordering scenarios. Also, we show that permutation distance metrics correlate very well with human judgements, and are impervious to lexical differences.
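One standard permutation distance of the kind the abstract proposes is Kendall's tau distance: the number of pairwise order disagreements between a translation's word order and the reference's (the metric choice here is ours for illustration; the paper evaluates several):

```python
def kendall_tau_distance(p, q):
    """Number of pairwise order disagreements between two permutations of
    the same items: 0 for identical order, n*(n-1)/2 for reversed order."""
    pos = {v: i for i, v in enumerate(q)}
    r = [pos[v] for v in p]               # p's items in q's coordinate system
    n = len(r)
    return sum(1 for i in range(n) for j in range(i + 1, n) if r[i] > r[j])
```

Normalizing by n*(n-1)/2 turns the distance into a word-order similarity score in [0, 1] that ignores lexical choice entirely.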
940.
Recent advances in statistical machine translation have used approximate beam search for NP-complete inference within probabilistic translation models. We present an alternative approach of sampling from the posterior distribution defined by a translation model. We define a novel Gibbs sampler for sampling translations given a source sentence and show that it effectively explores this posterior distribution. In doing so we overcome the limitations of heuristic beam search and obtain theoretically sound solutions to inference problems such as finding the maximum probability translation and minimum risk training and decoding.
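The principle behind a Gibbs sampler, alternating draws from full conditional distributions, can be shown on a toy target unrelated to the translation model: a standard bivariate normal with correlation ρ, whose conditionals are one-dimensional normals.

```python
import math
import random

def gibbs_bivariate_normal(rho, n_samples, burn_in=1000, seed=0):
    """Gibbs sampler for a standard bivariate normal with correlation rho.
    Each sweep draws x | y ~ N(rho*y, 1-rho^2) then y | x ~ N(rho*x, 1-rho^2);
    after burn-in, the (x, y) pairs approximate the joint distribution."""
    rng = random.Random(seed)
    sd = math.sqrt(1.0 - rho * rho)
    x = y = 0.0
    out = []
    for t in range(burn_in + n_samples):
        x = rng.gauss(rho * y, sd)
        y = rng.gauss(rho * x, sd)
        if t >= burn_in:
            out.append((x, y))
    return out
```

The empirical correlation of the chain converges to ρ, which is a simple check that the sampler explores the target rather than getting stuck.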