91.
We discuss bounded rationality learning beyond the traditional learning mechanisms, i.e., Recursive Ordinary Least Squares and Bayesian learning. For several reasons these mechanisms lack a behavioral interpretation and, following Simon's criticism, they appear to be substantively rational. In this paper, analyzing the Cagan model, we explore two learning mechanisms that appear more plausible from a behavioral point of view and, in some sense, procedurally rational: Least Mean Squares learning for linear models and Back Propagation for Artificial Neural Networks. Both algorithms search for a minimum of the variance of the forecast error by means of a steepest-descent gradient procedure. The analysis of the Cagan model yields an interesting result: non-convergence of learning to the Rational Expectations Equilibrium is not due to the restriction to linear learning devices; Back Propagation learning for Artificial Neural Networks may also fail to converge to the Rational Expectations Equilibrium of the model.
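As a rough illustration of the steepest-descent idea described in this abstract, the sketch below runs a Least Mean Squares update on a toy linear forecasting problem; the data-generating process, step size, and variable names are assumptions made for illustration and are not taken from the paper.

```python
import numpy as np

# Illustrative Least Mean Squares (LMS) learning sketch: an agent forecasts
# y_t from a feature vector x_t with a linear rule and updates its weights
# by one steepest-descent step on the squared forecast error.
# The synthetic environment and step size below are assumptions.

rng = np.random.default_rng(0)

def lms_learning(X, y, step=0.01):
    """Run LMS updates over a sequence of observations."""
    w = np.zeros(X.shape[1])          # initial beliefs
    errors = []
    for x_t, y_t in zip(X, y):
        forecast = w @ x_t            # agent's forecast
        err = y_t - forecast          # forecast error
        w += step * err * x_t         # gradient step on (1/2) * err**2
        errors.append(err)
    return w, np.array(errors)

# Synthetic linear environment (a stand-in for the true law of motion).
T, k = 5000, 3
X = rng.normal(size=(T, k))
true_w = np.array([0.5, -0.2, 0.8])
y = X @ true_w + 0.1 * rng.normal(size=T)

w_hat, errs = lms_learning(X, y)
print("learned weights:", w_hat.round(3))
print("final mean squared forecast error:", np.mean(errs[-500:] ** 2).round(4))
```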
92.
Distributed and Parallel Databases - Massive data transfers in modern data-intensive systems resulting from low data-locality and data-to-code system design hurt their performance and scalability....
93.
Genetic programming researchers have shown a growing interest in the study of gene regulatory networks over the last few years. Our team has also contributed to the field by defining two systems for the automatic reverse engineering of gene regulatory networks, called GRNGen and GeNet. In this paper, we revisit this work by describing the two approaches in detail and comparing them empirically. The results we report, and in particular the fact that GeNet can be used on large networks while GRNGen cannot, encourage us to pursue the study of GeNet in the future. We conclude the paper by discussing the main research directions that we plan to investigate to improve GeNet.
94.
Despite the relevance of the software evolution phase, there are few characterization studies on recurrent evolution growth patterns and on their impact on software properties such as coupling and cohesion. In this paper, we report a study designed to investigate whether the software evolution categories proposed by Lanza can be used to explain the growth of a system not only in terms of lines of code (LOC), but also in terms of metrics from the Chidamber and Kemerer (CK) object-oriented metrics suite. Our results show that high levels of recall (ranging on average from 52 % to 72 %) are achieved when using LOC to predict the evolution of coupling and size. For cohesion, we achieved lower recall rates (below 27 % on average).
95.
International Journal of Information Security - The complexity of today’s integrated circuit (IC) supply chain, organised in several tiers and including many companies located in different...
96.
The management of the huge and growing amount of information available nowadays makes Automatic Document Classification (ADC) not only crucial but also very challenging. Furthermore, the dynamics inherent to classification problems, mainly on the Web, make this task even more challenging. Despite this, the actual impact of such temporal evolution on ADC is still poorly understood in the literature. In this context, this work evaluates, characterizes, and exploits temporal evolution to improve ADC techniques. As a first contribution, we propose a pragmatic methodology for evaluating temporal evolution in ADC domains. Through this methodology, we can identify measurable factors associated with the degradation of ADC models over time. Going a step further, based on such analyses, we propose effective and efficient strategies to make current techniques more robust to natural shifts over time. We present a strategy, named temporal context selection, for selecting portions of the training set that minimize those factors. Our second contribution is a general algorithm, called Chronos, for determining such contexts. By instantiating Chronos, we are able to reduce uncertainty and improve the overall classification accuracy. Empirical evaluations of heuristic instantiations of the algorithm, named WindowsChronos and FilterChronos, on two real document collections demonstrate the usefulness of our proposal. Comparing them against state-of-the-art ADC algorithms shows that selecting temporal contexts allows improvements in classification accuracy of up to 10%. Finally, we highlight the applicability and generality of our proposal in practice, pointing to this study as a promising research direction.
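The abstract leaves the internals of Chronos unspecified, so the sketch below only illustrates the general flavour of temporal context selection: keeping the training years whose class distribution looks most like the most recent period. The cosine-similarity criterion, the `select_temporal_context` helper, and the toy data are assumptions and do not reproduce the actual Chronos, WindowsChronos, or FilterChronos algorithms.

```python
import numpy as np
from collections import Counter

# Illustrative sketch of temporal context selection: keep only the training
# years whose class distribution is most similar to the most recent year.
# The similarity measure, top_k choice, and data layout are assumptions
# for illustration and are NOT the paper's Chronos algorithm.

def class_distribution(labels, classes):
    counts = Counter(labels)
    dist = np.array([counts.get(c, 0) for c in classes], dtype=float)
    return dist / dist.sum() if dist.sum() else dist

def select_temporal_context(docs, top_k=3):
    """docs: list of (year, label, text). Returns the docs from the top_k
    years whose class distribution is closest to the latest year's."""
    classes = sorted({label for _, label, _ in docs})
    years = sorted({year for year, _, _ in docs})
    dists = {y: class_distribution([l for yr, l, _ in docs if yr == y], classes)
             for y in years}
    ref = dists[years[-1]]                       # latest year as reference
    def similarity(y):
        d = dists[y]
        return float(ref @ d / (np.linalg.norm(ref) * np.linalg.norm(d) + 1e-12))
    chosen = sorted(years, key=similarity, reverse=True)[:top_k]
    return [doc for doc in docs if doc[0] in chosen]

# Toy usage: the selected subset would then feed any ADC training procedure.
docs = [(2008, "sports", "..."), (2008, "politics", "..."),
        (2012, "sports", "..."), (2012, "sports", "..."),
        (2013, "sports", "..."), (2013, "politics", "...")]
print(sorted({y for y, _, _ in select_temporal_context(docs, top_k=2)}))
```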
97.
Neural Processing Letters - This paper presents an approach to determine a model of superficial tissue temperature dynamics during continuous wave CO2 laser irradiation. The main contribution...
98.
99.
System reliability analysis and optimization are important to efficiently utilize available resources and to develop an optimal system design architecture. System reliability optimization has been solved using optimization techniques, including meta-heuristics. The development of meta-heuristics has been an active research field within reliability optimization, wherein the redundancy, the component reliability, or both are to be determined. In recent years, a broad class of stochastic meta-heuristics, such as simulated annealing, genetic algorithms, tabu search, ant colony, and particle swarm optimization paradigms, has been developed for reliability-redundancy optimization of systems. Recently, a new kind of evolutionary algorithm called the Imperialist Competitive Algorithm (ICA) was proposed. The ICA is based on imperialistic competition, where the population is represented by countries classified as imperialists or colonies. However, the trade-off between exploration (i.e., the global search) and exploitation (i.e., the local search) of the search space is critical to the success of the classical ICA approach. This paper proposes an improvement to the ICA, the AR-ICA approach, which implements an attraction and repulsion concept during the search for better solutions. Simulation results demonstrate that AR-ICA is an efficient optimization technique, obtaining promising solutions when compared with the previously best-known results on four different benchmarks for the reliability-redundancy allocation problem presented in the literature.
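As a rough sketch of the attraction and repulsion idea mentioned in this abstract, the toy code below runs an ICA-style loop on a simple sphere-minimisation problem, where colonies are attracted toward their imperialist and repelled from the current worst solution. The move rule, coefficients, empire assignment, and objective are assumptions made for illustration, not the paper's exact AR-ICA operator or the reliability-redundancy benchmark.

```python
import numpy as np

# Illustrative ICA-style loop with an added attraction-repulsion move,
# applied to a toy objective (sphere function). All constants below are
# illustrative assumptions, not the paper's AR-ICA settings.

rng = np.random.default_rng(1)

def sphere(x):                       # toy objective to minimise
    return float(np.sum(x ** 2))

def ar_move(colony, imperialist, worst, beta=2.0, gamma=0.5):
    """Move a colony toward its imperialist (attraction) and away from the
    worst-known solution (repulsion), with uniform random step sizes."""
    attract = beta * rng.random() * (imperialist - colony)
    repulse = gamma * rng.random() * (colony - worst)
    return colony + attract + repulse

dim, n_pop, n_imp, iters = 5, 30, 3, 200
pop = rng.uniform(-5, 5, size=(n_pop, dim))

for _ in range(iters):
    cost = np.array([sphere(p) for p in pop])
    order = np.argsort(cost)
    imperialists = pop[order[:n_imp]]            # best solutions lead empires
    worst = pop[order[-1]].copy()                # worst solution repels
    for i in order[n_imp:]:                      # colonies
        emp = imperialists[i % n_imp]            # crude empire assignment
        pop[i] = ar_move(pop[i], emp, worst)

best = min(pop, key=sphere)
print("best cost found:", round(sphere(best), 6))
```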
100.
We propose a general framework to incorporate first-order logic (FOL) clauses, thought of as an abstract and partial representation of the environment, into kernel machines that learn within a semi-supervised scheme. We rely on a multi-task learning scheme in which each task is associated with a unary predicate defined on the feature space, while higher-level abstract representations consist of FOL clauses made of those predicates. We re-use the kernel machine mathematical apparatus to solve the problem as primal optimization of a function composed of the loss on the supervised examples, the regularization term, and a penalty term that enforces the real-valued constraints derived from the predicates. Unlike in classic kernel machines, however, the overall function to be optimized is, depending on the logic clauses, no longer convex. An important contribution is to show that while tackling the optimization by classic numerical schemes is likely to be hopeless, a stage-based learning scheme, in which we first learn from the supervised examples until convergence is reached and then continue by enforcing the logic clauses, is a viable direction to attack the problem. Some promising experimental results are given on artificial learning tasks and on the automatic tagging of bibtex entries to emphasize the comparison with plain kernel machines.
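A minimal sketch of the stage-based scheme described in this abstract follows, using a toy linear model with sigmoid outputs in place of a full kernel machine. The clause A(x) => B(x), its relaxation as a hinge-style penalty, the penalty weight, and the data are assumptions made for illustration rather than the paper's actual formulation.

```python
import numpy as np
from scipy.optimize import minimize

# Illustrative two-stage optimisation: first fit the supervised examples,
# then continue with a penalty enforcing a relaxed FOL clause on unlabeled
# data. The clause, relaxation, weights, and data are assumptions.

rng = np.random.default_rng(2)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def predicates(theta, X):
    wA, wB = np.split(theta, 2)
    return sigmoid(X @ wA), sigmoid(X @ wB)     # real-valued predicates A, B

def supervised_loss(theta, X, yA, yB, lam=1e-2):
    pA, pB = predicates(theta, X)
    fit = np.mean((pA - yA) ** 2) + np.mean((pB - yB) ** 2)
    return fit + lam * np.sum(theta ** 2)       # loss + regularization term

def clause_penalty(theta, X_unlab):
    pA, pB = predicates(theta, X_unlab)
    # Relaxation of the clause "forall x: A(x) => B(x)":
    # penalise points where A fires but B does not.
    return np.mean(np.maximum(0.0, pA - pB) ** 2)

def total_objective(theta, X, yA, yB, X_unlab, mu=1.0):
    return supervised_loss(theta, X, yA, yB) + mu * clause_penalty(theta, X_unlab)

# Toy data: A = "x0 > 0", B = "x0 > -0.5", so A(x) => B(x) holds.
d = 3
X_lab = np.hstack([rng.normal(size=(40, d)), np.ones((40, 1))])
yA = (X_lab[:, 0] > 0).astype(float)
yB = (X_lab[:, 0] > -0.5).astype(float)
X_unlab = np.hstack([rng.normal(size=(400, d)), np.ones((400, 1))])

theta0 = np.zeros(2 * (d + 1))
# Stage 1: learn from the supervised examples only.
stage1 = minimize(supervised_loss, theta0, args=(X_lab, yA, yB))
# Stage 2: continue from that solution with the logic penalty switched on.
stage2 = minimize(total_objective, stage1.x, args=(X_lab, yA, yB, X_unlab))

print("clause violation before stage 2:", round(clause_penalty(stage1.x, X_unlab), 4))
print("clause violation after  stage 2:", round(clause_penalty(stage2.x, X_unlab), 4))
```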