11.
12.
Fabrizio Angiulli, Rachel Ben-Eliyahu-Zohary, Luigi Palopoli 《Artificial Intelligence》2008,172(16-17):1837-1872
Default logics are usually used to describe the regular behavior and normal properties of domain elements. In this paper we suggest, conversely, that the framework of default logics can be exploited for detecting outliers. Outliers are observations, expressed by sets of literals, that feature unexpected semantic characteristics. These sets of literals are selected among those explicitly embodied in the given knowledge base. Hence, essentially, we perceive outlier detection as a knowledge discovery technique. This paper defines the notion of outlier in two related formalisms for specifying defaults: Reiter's default logic and extended disjunctive logic programs. For each of the two formalisms, we show that finding outliers is quite complex. Indeed, we prove that several versions of the outlier detection problem lie over the second level of the polynomial hierarchy. We believe that a thorough complexity analysis, as done here, is a useful preliminary step towards developing effective heuristics and exploring tractable subsets of outlier detection problems.
13.
Many time-critical applications require predictable performance in the presence of failures. This paper considers a distributed system with independent periodic tasks which can checkpoint their state on some reliable medium in order to handle failures. The problem of preemptively scheduling a set of such tasks is discussed, where every occurrence of a task has to be completely executed before the next occurrence of the same task can start. Efficient scheduling algorithms are proposed which yield sub-optimal schedules when there is provision for fault-tolerance. The performance of the proposed solutions is evaluated in terms of the number of processors and the cost of the checkpoints needed. Moreover, analytical studies are used to reveal interesting trade-offs associated with the scheduling algorithms.

This work has been supported by grants from the Italian Ministero dell'Università e della Ricerca Scientifica e Tecnologica and the Consiglio Nazionale delle Ricerche-Progetto Finalizzato Sistemi Informatici e Calcolo Parallelo.
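The paper's scheduling algorithms are not reproduced in the abstract. As an illustrative sketch only (the task model, the checkpoint-cost model, and all parameter names are our assumptions), a utilization-style feasibility check can account for the checkpoint overhead plus the re-execution of one inter-checkpoint interval after a failure:

```python
import math

def ft_utilization(tasks, cs):
    """Worst-case processor utilization of periodic tasks with checkpointing.

    tasks: list of (C, T) pairs -- execution time and period of each task.
    cs:    cost of saving one checkpoint.
    Each task takes n equidistant checkpoints; after a single failure it
    re-executes at most one inter-checkpoint interval (C/n) plus one extra
    checkpoint. The n minimizing worst-case demand balances the overhead
    n*cs against the recovery cost C/n, giving n ~ sqrt(C/cs).
    """
    total = 0.0
    for c, t in tasks:
        n = max(1, round(math.sqrt(c / cs)))   # near-optimal checkpoint count
        worst = c + n * cs + (c / n + cs)      # run + overhead + one recovery
        total += worst / t
    return total

def feasible_on_one_processor(tasks, cs, bound=1.0):
    """EDF-style sufficient test: schedulable if total utilization <= bound."""
    return ft_utilization(tasks, cs) <= bound
```

The square-root rule for the checkpoint count comes from minimizing n*cs + C/n over n; the paper's own algorithms additionally decide task-to-processor assignment, which this sketch omits.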
14.
Gian Luigi Angrisani, Piriya Taptimthong, Susanne Elisabeth Thürer, Christian Klose, Hans Jürgen Maier, Marc Christopher Wurz, Kai Möhwald 《Advanced Engineering Materials》2018,20(9)
15.
This article proposes an optimization–simulation model for planning the transport of supplies to large public infrastructure works located in congested urban areas. The purpose is to minimize their impact on the environment and on private transportation users on the local road network. To achieve this goal, the authors propose and solve an optimization problem for minimizing the total system cost, made up of the operating costs of the various alternatives for taking supplies to the worksite and the costs borne by private vehicle users as a result of the increased congestion due to the movement of heavy goods vehicles transporting material to the worksite. The proposed optimization problem is a bi-level mathematical programming model. The upper level defines the total cost of the system, which is minimized taking into account environmental constraints on atmospheric and noise pollution. The lower level defines the optimization problem representing private transportation users' behavior, assuming they choose the route that minimizes their total individual journey costs. Given the special characteristics of the problem, a heuristic algorithm is proposed for finding optimum solutions. Both the model developed and the specific solution algorithm are applied to the real case of building a new port at Laredo (Northern Spain). A series of interesting conclusions are obtained from the corresponding sensitivity analysis.
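The bi-level structure described above can be sketched on a toy network: two parallel routes, a standard BPR volume-delay function, and made-up demand, capacity, and cost figures. This is not the paper's model or heuristic, only an illustration of the upper-level/lower-level interaction:

```python
def bpr_time(t0, flow, cap, alpha=0.15, beta=4.0):
    """Standard BPR volume-delay function: travel time grows with congestion."""
    return t0 * (1.0 + alpha * (flow / cap) ** beta)

def lower_level(demand, truck_flow, t0=(10.0, 14.0), cap=(1000.0, 1200.0)):
    """Lower level: private drivers split over two parallel routes so that
    travel times equalize (Wardrop user equilibrium). Supply trucks are
    assumed to use route 0 only. Solved by bisection on the share x of
    drivers taking route 0."""
    lo, hi = 0.0, 1.0
    for _ in range(60):
        x = 0.5 * (lo + hi)
        t_a = bpr_time(t0[0], x * demand + truck_flow, cap[0])
        t_b = bpr_time(t0[1], (1.0 - x) * demand, cap[1])
        if t_a > t_b:
            hi = x          # route 0 too slow: shift drivers away from it
        else:
            lo = x
    x = 0.5 * (lo + hi)
    t_a = bpr_time(t0[0], x * demand + truck_flow, cap[0])
    return x, demand * t_a  # route-0 share, total driver travel time

def upper_level(alternatives, demand=1500.0, vot=0.2, truck_cap=250.0):
    """Upper level: enumerate supply alternatives (name, truck flow, operating
    cost), discard those violating an environmental cap on heavy-vehicle flow
    (a stand-in for the noise/air-quality constraints), and keep the one
    minimizing operating cost plus the congestion cost borne by drivers."""
    best = None
    for name, trucks, op_cost in alternatives:
        if trucks > truck_cap:
            continue
        _, total_time = lower_level(demand, trucks)
        cost = op_cost + vot * total_time
        if best is None or cost < best[1]:
            best = (name, cost)
    return best
```

The real model replaces route enumeration with a full traffic assignment on the local road network, which is why the authors need a dedicated heuristic.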
16.
Background
COSMIC Function Points and traditional Function Points (i.e., IFPUG Function Points and more recent variants such as NESMA and FISMA) are probably the best known and most widely used Functional Size Measurement methods. The relationship between the two kinds of Function Points still needs to be investigated. If traditional Function Points could be accurately converted into COSMIC Function Points and vice versa, then, by measuring one kind of Function Points, one would be able to obtain the other kind, and one might measure either kind interchangeably. Several studies have been performed to evaluate whether a correlation or a conversion function between the two measures exists. Specifically, it has been suggested that the relationship between traditional Function Points and COSMIC Function Points may not be linear, i.e., the value of COSMIC Function Points seems to increase more than proportionally to an increase of traditional Function Points.

Objective

This paper aims to verify this hypothesis using available datasets that collect both FP and CFP size measures.

Method

Rigorous statistical analysis techniques are used, specifically Piecewise Linear Regression, whose applicability conditions are systematically checked. A Piecewise Linear Regression curve is a series of interconnected segments; in this paper, we focus on curves composed of two segments. We also used Linear and Parabolic Regression to check if, and to what extent, Piecewise Linear Regression provides an advantage over other regression techniques. We used two categories of regression techniques: Ordinary Least Squares regression, based on the usual minimization of the sum of squared residuals (equivalently, of the average squared residual), and Least Median of Squares regression, a robust technique based on the minimization of the median squared residual. Using a robust regression technique helps filter out the excessive influence of outliers.

Results

The analysis of the relationship between traditional Function Points and COSMIC Function Points based on the aforementioned data analysis techniques yields valid, significant models. However, different results are achieved for the various available datasets. In practice, we obtained statistically valid linear, piecewise linear, and non-linear conversion formulas for several datasets. In general, none of these is better than the others in a statistically significant manner.

Conclusions

Practitioners interested in the conversion of FP measures into CFP (or vice versa) cannot just pick a conversion model and be sure that it will yield the best results. All the regression models we tested provide good results with some datasets. In practice, all the models described in the paper, in particular both linear and non-linear ones, should be evaluated in order to identify the ones best suited for the specific dataset at hand.
17.
Inspired by recent work on robust and fast computation of 3D Local Reference Frames (LRFs), we propose a novel pipeline for coarse registration of 3D point clouds. Key to the method are: (i) the observation that any two corresponding points endowed with an LRF provide a hypothesis on the rigid motion between two views, (ii) the intuition that feature points can be matched based solely on cues directly derived from the computation of the LRF, (iii) a feature detection approach relying on a saliency criterion which captures the ability to establish an LRF repeatably. Unlike related work in literature, we also propose a comprehensive experimental evaluation based on diverse kinds of data (such as those acquired by laser scanners, Kinect and stereo cameras) as well as on quantitative comparison with respect to other methods. We also address the issue of setting the many parameters that characterize coarse registration pipelines fairly and realistically. The experimental evaluation vouches that our method can handle effectively data acquired by different sensors and is remarkably fast.
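Observation (i) above, that a single pair of corresponding points endowed with LRFs fixes a rigid-motion hypothesis, can be sketched as follows (a minimal illustration; the function and variable names are ours):

```python
import numpy as np

def rigid_motion_from_lrf(p1, F1, p2, F2):
    """Hypothesize the rigid motion mapping view 1 onto view 2 from a single
    point correspondence (p1 <-> p2) whose Local Reference Frames F1, F2 are
    3x3 rotation matrices (frame axes stored as columns).
    The motion must carry frame 1 onto frame 2 and point 1 onto point 2:
        R @ F1 = F2   and   R @ p1 + t = p2
    so R and t follow in closed form from one correspondence."""
    R = F2 @ F1.T       # F1 orthonormal, so F1.T is its inverse
    t = p2 - R @ p1
    return R, t
```

This is why LRF-based pipelines need only one good correspondence per hypothesis, whereas point-only methods need at least three non-collinear matches to fix a rigid motion.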
18.
The main goal of this paper is to show how relatively minor modifications of well-known algorithms (in particular, back propagation) can dramatically increase the performance of an artificial neural network (ANN) for time series prediction. We denote our proposed sets of modifications as the 'self-momentum', 'Freud' and 'Jung' rules. In our opinion, they provide an example of an alternative approach to the design of learning strategies for ANNs, one that focuses on basic mathematical conceptualization rather than on formalism and demonstration. The complexity of actual prediction problems makes it necessary to experiment with modelling possibilities whose inherent mathematical properties are often not well understood yet. The problem of time series prediction in stock markets is a case in point. It is well known that asset price dynamics in financial markets are difficult to trace, let alone to predict with an operationally interesting degree of accuracy. We therefore take financial prediction as a meaningful test bed for the validation of our techniques. We discuss in some detail both the theoretical underpinnings of the technique and our case study about financial prediction, finding encouraging evidence that supports the theoretical and operational viability of our new ANN specifications. Ours is clearly only a preliminary step. Further developments of ANN architectures with more and more sophisticated 'learning to learn' characteristics are now under study and test.
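The abstract does not specify the 'self-momentum', 'Freud' and 'Jung' rules themselves. For orientation, here is the classical momentum variant of gradient descent that such back-propagation modifications build on, shown on a one-dimensional objective (a sketch, not the authors' method):

```python
def train_momentum(grad, w0, lr=0.1, mu=0.9, steps=100):
    """Gradient descent with a classical momentum term:
        v <- mu * v - lr * grad(w)
        w <- w + v
    The velocity v accumulates past gradients, smoothing oscillations and
    accelerating progress along persistent descent directions. The paper's
    rules are modifications of updates of this general kind."""
    w, v = w0, 0.0
    for _ in range(steps):
        v = mu * v - lr * grad(w)
        w = w + v
    return w
```

On a full network the same update is applied per weight, with `grad` supplied by back propagation.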
19.
When the purpose of a survival regression model is to predict future outcomes, the predictive accuracy of the model needs to be evaluated before practical application. Various measures of predictive accuracy have been proposed for survival data, none of which has been adopted as a standard, and they are largely neglected in statistical software. We developed the surev library for R and S-PLUS, which includes functions for evaluating the predictive accuracy measures proposed by Schemper and Henderson. The library evaluates the predictive accuracy of parametric regression models and of Cox models. The predictive accuracy of the Cox model can also be obtained when time-dependent covariates are included because of non-proportional hazards, or when using Bayesian model averaging. The use of the library is illustrated with examples based on a real data set.
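The Schemper and Henderson measures implemented by surev are not reproduced here. As a generic illustration of what "evaluating predictive accuracy" means for censored survival data, here is a plain-Python sketch of Harrell's concordance index, a different but widely used measure:

```python
def harrell_c(times, events, risk):
    """Harrell's concordance index -- NOT the Schemper-Henderson measure that
    surev implements, but a common stand-in for survival predictive accuracy.
    Over all comparable pairs, it counts how often the subject with the
    shorter observed survival has the higher predicted risk.
    times:  observed follow-up times
    events: 1 = event (death) observed, 0 = censored
    risk:   model-predicted risk scores (higher = worse prognosis)"""
    conc = ties = total = 0
    n = len(times)
    for i in range(n):
        for j in range(n):
            # pair (i, j) is comparable only if i's event was observed
            # and j was still at risk when it happened
            if events[i] == 1 and times[i] < times[j]:
                total += 1
                if risk[i] > risk[j]:
                    conc += 1
                elif risk[i] == risk[j]:
                    ties += 1
    return (conc + 0.5 * ties) / total
```

A value of 1.0 means perfect risk ranking, 0.5 means no better than chance; the censoring logic (skipping pairs where the earlier time is censored) is the part that distinguishes survival measures from ordinary classification accuracy.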
20.
We analyze generalization in XCSF and introduce three improvements. We begin by showing that the types of generalizations evolved by XCSF can be influenced by the input range. To explain these results we present a theoretical analysis of the convergence of classifier weights in XCSF which highlights a broader issue. In XCSF, because of the mathematical properties of the Widrow-Hoff update, the convergence of classifier weights in a given subspace can be slow when the spread of the eigenvalues of the autocorrelation matrix associated with each classifier is large. As a major consequence, the system's accuracy pressure may act before classifier weights are adequately updated, so that XCSF may evolve piecewise constant approximations, instead of the intended, and more efficient, piecewise linear ones. We propose three different ways to update classifier weights in XCSF so as to increase the generalization capabilities of XCSF: one based on a condition-based normalization of the inputs, one based on linear least squares, and one based on the recursive version of linear least squares. Through a series of experiments we show that while all three approaches significantly improve XCSF, least squares approaches appear to be best performing and most robust. Finally we show how XCSF can be extended to include polynomial approximations.
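The contrast between the Widrow-Hoff update and the least-squares alternatives can be sketched for a single classifier's linear model, outside the full XCSF loop (a simplified stand-alone illustration):

```python
import numpy as np

def widrow_hoff(X, y, eta=0.2, epochs=50):
    """Widrow-Hoff (LMS) update used by standard XCSF:
        w <- w + eta * (y - w.x) * x,  one sample at a time.
    Convergence is slow when the eigenvalue spread of the input
    autocorrelation matrix E[x x^T] is large."""
    w = np.zeros(X.shape[1])
    for _ in range(epochs):
        for x, t in zip(X, y):
            w += eta * (t - w @ x) * x
    return w

def recursive_least_squares(X, y, delta=1000.0):
    """Recursive least squares, one of the paper's replacement updates:
    maintains an inverse-autocorrelation estimate P, so each update is
    preconditioned and largely insensitive to the eigenvalue spread."""
    d = X.shape[1]
    w = np.zeros(d)
    P = delta * np.eye(d)          # large delta ~ weak initial regularization
    for x, t in zip(X, y):
        Px = P @ x
        k = Px / (1.0 + x @ Px)    # gain vector
        w = w + k * (t - w @ x)
        P = P - np.outer(k, Px)
    return w
```

On a noiseless linear target, RLS recovers the weights after a single pass over the data, while LMS needs many epochs; this single-pass behavior is what lets the least-squares variants keep up with XCSF's accuracy pressure.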