61.
In this paper, we develop the idea of a universal anytime intelligence test. The meaning of the terms “universal” and “anytime” is manifold here: the test should be able to measure the intelligence of any biological or artificial system that exists now or in the future. It should also be able to evaluate both inept and brilliant systems (any intelligence level) as well as very slow to very fast systems (any time scale). In addition, the test may be interrupted at any time, producing an approximation of the intelligence score, such that the more time is left for the test, the better the assessment. To achieve this, our test proposal builds on previous work on the measurement of machine intelligence based on Kolmogorov complexity and universal distributions, developed in the late 1990s (C-tests and compression-enhanced Turing tests). It also builds on the more recent idea of measuring intelligence through dynamic/interactive tests held against a universal distribution of environments. We discuss some of these tests and highlight their limitations, since we want to construct a test that is both general and practical. Consequently, we introduce many new ideas that develop the early “compression tests” and the more recent definition of “universal intelligence” in order to design new “universal intelligence tests”, where a feasible implementation has been a design requirement. One of these tests is the “anytime intelligence test”, which adapts to the examinee's level of intelligence in order to obtain an intelligence score within a limited time.
62.
The stego-image quality produced by the histogram-shifting based reversible data hiding technique is high; however, it often suffers from a lower embedding capacity than other types of reversible data hiding techniques. In 2009, Tsai et al. addressed this problem by exploiting the similarity of neighboring pixels to construct a histogram of prediction errors; data embedding is done by shifting the error histogram. However, Tsai et al.’s method does not fully exploit the correlation of neighboring pixels. In this paper, a set of basic pixels is employed to improve the prediction accuracy and thereby increase the payload. To further improve the image quality, a threshold is used so that only low-variance blocks join the embedding process. According to the experimental results, the proposed method provides better or comparable stego-image quality than Tsai et al.’s method and other existing reversible data hiding methods under the same payload.
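The core mechanism the abstract describes can be sketched in one dimension. The following is a minimal, illustrative sketch, not the authors' algorithm: even-indexed pixels stand in for the unmodified "basic" pixel set, odd-indexed pixels are predicted from their left neighbour, and errors equal to the histogram peak carry one bit each (errors above the peak are shifted to make room). The extractor returns padding zeros beyond the payload length, which the receiver truncates.

```python
def embed(pixels, payload, peak=0):
    # Histogram-shifting embedding on prediction errors (1-D sketch).
    # Even-indexed pixels are left untouched and serve as predictors.
    stego, bits = list(pixels), iter(payload)
    for i in range(1, len(pixels), 2):
        e = pixels[i] - pixels[i - 1]       # prediction error
        if e > peak:
            e += 1                          # shift right to open the slot at peak+1
        elif e == peak:
            b = next(bits, None)            # embed one payload bit, if any remain
            if b is not None:
                e += b
        stego[i] = pixels[i - 1] + e
    return stego

def extract(stego, peak=0):
    # Recover the embedded bits and restore the original pixels exactly.
    bits, recovered = [], list(stego)
    for i in range(1, len(stego), 2):
        e = stego[i] - stego[i - 1]
        if e == peak:
            bits.append(0)
        elif e == peak + 1:
            bits.append(1)
            e -= 1
        elif e > peak + 1:
            e -= 1                          # undo the shift
        recovered[i] = stego[i - 1] + e
    return bits, recovered
```

Reversibility holds because every operation on an error value is invertible given the peak; capacity equals the number of peak-valued errors, which is why prediction (concentrating errors near zero) raises the payload.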
63.
A codesign approach combining predictive control compensation and network scheduling is presented in this paper to overcome the adverse effects of stochastic time delays and packet losses encountered in network-based real-time control systems. State estimation and control prediction compensation algorithms handle the random network delays in the feedback and forward channels, and stability criteria are analyzed. A proper sampling rate is determined through network scheduling to meet the desired system performance while the network-induced delay is tolerated. Simulations show that the codesign approach works well under bounded network delay.
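The prediction-compensation idea can be illustrated with a scalar toy model; this is a hedged sketch of the general principle, not the paper's algorithm. The plant model, gain, and in-flight control sequence below are all made-up illustrative values: instead of acting on the stale state received over the network, the controller rolls the model forward over the controls already in transit and acts on the predicted state.

```python
def predict_state(a, b, x_last, u_in_flight):
    # Roll the scalar plant model x[k+1] = a*x[k] + b*u[k] forward over
    # the controls already travelling through the network, to estimate
    # the state at which the next control will actually be applied.
    x = x_last
    for u in u_in_flight:
        x = a * x + b * u
    return x

def compensated_control(a, b, k_gain, x_last, u_in_flight):
    # State feedback computed against the predicted, not the delayed, state.
    return -k_gain * predict_state(a, b, x_last, u_in_flight)
```

With a bounded delay the length of `u_in_flight` is bounded, which is what makes this kind of compensation tractable.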
64.
One of the major advantages of orthonormal basis filter (OBF) models is that they are parsimonious in parameters. However, this is true only if an appropriate type of filter and reasonably accurate dominant poles of the system are used in developing the model; an arbitrary choice of filter type and poles may lead to a non-parsimonious model. While selecting the type of filter may be simple when the damping characteristics of the system are known, finding good estimates of the dominant pole(s) of the system is not a trivial task. Another important advantage of OBF models is that time delays can be easily estimated and incorporated into the model. Currently, the time delay of the system is estimated from the step response of the OBF model using the tangent method. While this method is effective in estimating the time delay of systems that can be accurately modeled by first-order plus time delay (FOPTD) models, its accuracy is low for systems with second- and higher-order dynamics. In this paper, a scheme is proposed that yields a parsimonious OBF model and a better estimate of the time delay starting from an arbitrary set of poles.
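The tangent method mentioned above can be sketched directly. This is a minimal illustration on a synthetic FOPTD step response (dead time and time constant are made-up values), not the paper's improved scheme: take the steepest tangent of the step response and extrapolate it back to the initial value; the crossing time is the apparent dead time.

```python
import math

def tangent_delay(t, y):
    # Tangent method: find the steepest point of the step response via
    # central differences, then extrapolate that tangent back to y[0].
    slopes = [(y[i + 1] - y[i - 1]) / (t[i + 1] - t[i - 1])
              for i in range(1, len(y) - 1)]
    j = max(range(len(slopes)), key=lambda i: slopes[i])
    i = j + 1                               # index of the steepest sample
    return t[i] - (y[i] - y[0]) / slopes[j]

# Synthetic FOPTD step response: dead time d = 2, time constant tau = 1.
d, tau = 2.0, 1.0
t = [0.01 * k for k in range(1001)]
y = [0.0 if ti < d else 1.0 - math.exp(-(ti - d) / tau) for ti in t]
```

For a true FOPTD response the tangent crosses the baseline exactly at the dead time; for higher-order responses the inflection point moves, which is the accuracy limitation the abstract points out.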
65.
Packages are important high-level organizational units for large object-oriented systems. Package-level metrics characterize attributes of packages such as size, complexity, and coupling. Empirical evidence is needed to support collecting these metrics and using them as early indicators of important external software quality attributes. In this paper, three suites of package-level metrics (Martin, MOOD, and CK) are evaluated and compared empirically in predicting the numbers of pre-release and post-release faults in packages. Eclipse, one of the largest open source systems, is used as a case study. The results indicate that prediction models based on the Martin suite are more accurate than those based on the MOOD and CK suites across releases of Eclipse.
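The simplest form of such a fault-prediction model is a regression from a metric to a fault count. The sketch below is purely illustrative: ordinary least squares on one hypothetical coupling metric, with made-up numbers rather than Eclipse data, and no claim that this is the modeling technique the study used.

```python
def fit_ols(x, y):
    # Ordinary least squares fit y ~ w*x + c for a single predictor,
    # e.g. a package coupling metric vs. pre-release fault count.
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    w = (sum((a - mx) * (b - my) for a, b in zip(x, y))
         / sum((a - mx) ** 2 for a in x))
    return w, my - w * mx

# Hypothetical data: coupling metric per package and observed faults.
coupling = [2, 5, 9, 14]
faults = [1, 3, 5, 8]
w, c = fit_ols(coupling, faults)
```

Comparing such models built from different metric suites (Martin vs. MOOD vs. CK) on held-out releases is the shape of the evaluation the abstract describes.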
66.
Gaussian process (GP) models form an emerging methodology for modelling nonlinear dynamic systems that tries to overcome certain limitations inherent in traditional methods such as neural networks (ANN) or local model networks (LMN). The GP model seems promising for three reasons. First, fewer training parameters are needed to parameterize the model. Second, the variance of the model's output, depending on data positioning, is obtained. Third, prior knowledge, e.g. in the form of linear local models, can be included in the model. In this paper the focus is on GPs with incorporated local models, an approach that could replace local model networks. Much of the effort up to now has been spent on developing the methodology of the GP model with included local models, while no application and practical validation has yet been carried out. The aim of this paper is therefore twofold: first, to present the methodology of GP model identification with emphasis on including prior knowledge in the form of linear local models; second, to demonstrate the method practically on two higher-order dynamical systems, one based on simulation and one based on measurement data.
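The second advantage above, a predictive variance that depends on where the data lie, falls directly out of standard GP regression. The following is a generic sketch with a squared-exponential kernel and arbitrarily chosen hyperparameters, not the paper's local-model-augmented formulation:

```python
import numpy as np

def gp_predict(X, y, Xs, ell=1.0, sf=1.0, noise=1e-4):
    # GP regression with a squared-exponential kernel: returns the
    # predictive mean and variance at the test inputs Xs.
    def kern(A, B):
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(axis=2)
        return sf ** 2 * np.exp(-0.5 * d2 / ell ** 2)

    K = kern(X, X) + noise * np.eye(len(X))   # training covariance + noise
    Ks, Kss = kern(Xs, X), kern(Xs, Xs)
    alpha = np.linalg.solve(K, y)
    mean = Ks @ alpha
    var = np.diag(Kss - Ks @ np.linalg.solve(K, Ks.T))
    return mean, var
```

Far from the training inputs the variance grows back toward the prior, which is the data-positioning-dependent uncertainty the abstract highlights; incorporating linear local models amounts to changing the prior mean/kernel.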
67.
In this paper we study the problem of estimating the possibly non-homogeneous material coefficients inside a physical system from transient excitations and measurements made at a few points on the boundary. We assume that an adequate finite element (FEM) model of the system is available, whose distributed physical parameters must be estimated from the experimental data. We propose a space–time localization approach that gives a better-conditioned estimation problem without the need for expensive regularization. Some experimental results obtained on an elastic system with random coefficients are given.
68.
There are many dynamic multi-objective optimization problems (DMOPs) in real-life engineering applications whose objectives change over time. After an environmental change occurs, prediction strategies are commonly used in dynamic multi-objective optimization algorithms to find the new Pareto optimal set (POS). More accurate prediction means the algorithm requires fewer computational resources to make the population approximate the Pareto optimal front (POF). This paper proposes a hybrid diversity maintenance method to improve prediction accuracy. The method consists of three steps, implemented after an environmental change. The first step uses the moving direction of the center points to relocate a number of solutions close to the new Pareto front. The second step, based on self-defined minimum and maximum points of the POS, applies a gradual search to produce well-distributed solutions in the decision space, compensating for the inaccuracy of the first step while further enhancing the convergence and diversity of the population. In the third step, some diverse individuals are randomly generated within the region of the next probable POS, which promotes the diversity of the population. Eventually the prediction becomes more accurate, as solutions with good convergence and diversity are selected after the non-dominated sort [1] on the combined solutions generated by the three steps. Compared with three other prediction methods on a series of test instances, our method is very competitive in convergence and diversity as well as in the speed at which it responds to environmental changes.
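The first step above, relocating solutions along the moving direction of the center points, can be sketched as follows. This is a minimal illustration of centroid-based prediction under the stated assumption that the POS translates between environments; it is not the full hybrid method.

```python
def centroid(pop):
    # Component-wise mean of a population of decision vectors.
    n, dim = len(pop), len(pop[0])
    return [sum(sol[d] for sol in pop) / n for d in range(dim)]

def predict_next_pos(pop_prev, pop_curr):
    # Translate every current solution by the displacement of the POS
    # center between the last two environments, as a first guess at the
    # new Pareto optimal set after a change.
    step = [c - p for c, p in zip(centroid(pop_curr), centroid(pop_prev))]
    return [[x + s for x, s in zip(sol, step)] for sol in pop_curr]
```

Steps two and three then repair this guess: the gradual search fills in well-distributed solutions where the translation was inaccurate, and random individuals in the probable POS region keep the population diverse.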
69.
70.
Dagstuhl seminar no. 10102 on discrete event logistic systems recognized a network of persistent models as a “Grand Challenge.” Such an on-line model network would offer an infrastructure that facilitates the management of logistic operations. The ambition to create a network of persistent models implies a radical shift for model design activities, as the objective is an infrastructure rather than application-specific solutions. In particular, model developers can no longer assume that they know what their model will be used for. It is no longer possible to design for the expected. This paper presents insights into model development and design in the absence of precise knowledge about a model's usage. Basically, model developers may rely only on the presence of the real-world counterpart mirrored by their model and a general idea about the nature of the application (e.g. coordination of logistic operations). When the invariants of the real-world counterpart suffice for models to be valid, the models become reusable and integrable. As these models remain valid under a wide range of situations, they become multi-purpose and durable resources rather than single-purpose short-lived components or, even worse, legacy. Moreover, and more specifically, the paper describes how to build models that allow their users to generate predictions in unexpected situations and atypical conditions. Referring to previous work, the paper concisely discusses how these predictions can be generated starting from the models. This prediction-generating technology is currently being transferred into an industrial MES.