1,847 results found (search time: 15 ms)
41.
Many time-critical applications require predictable performance in the presence of failures. This paper considers a distributed system of independent periodic tasks that can checkpoint their state on a reliable medium in order to handle failures. The problem of preemptively scheduling a set of such tasks is discussed, where every occurrence of a task has to be completely executed before the next occurrence of the same task can start. Efficient scheduling algorithms are proposed that yield sub-optimal schedules when provision is made for fault tolerance. The performance of the proposed solutions is evaluated in terms of the number of processors and the cost of the checkpoints needed. Moreover, analytical studies are used to reveal interesting trade-offs associated with the scheduling algorithms. This work has been supported by grants from the Italian Ministero dell'Università e della Ricerca Scientifica e Tecnologica and the Consiglio Nazionale delle Ricerche-Progetto Finalizzato Sistemi Informatici e Calcolo Parallelo.
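The trade-off between checkpoint cost and re-execution time that the paper analyzes can be illustrated with a textbook-style calculation (a sketch under an assumed failure model of at most one failure per task occurrence; this is not the authors' algorithm, and all parameters are illustrative):

```python
def worst_case_time(E, c, k):
    """Worst-case completion time of one task occurrence with execution
    time E, k equidistant checkpoints of cost c each, and at most one
    failure: all checkpoints are taken and, after the failure, one
    segment of length E / (k + 1) is re-executed."""
    return E + k * c + E / (k + 1)

def best_checkpoint_count(E, c, k_max=100):
    """Number of checkpoints minimizing the worst-case completion time."""
    return min(range(k_max + 1), key=lambda k: worst_case_time(E, c, k))
```

For a task with 10 time units of work and a checkpoint cost of 1 unit, two checkpoints minimize the worst-case time; more checkpoints add overhead faster than they shrink the re-executed segment.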
42.
In the information age, the storage and accessibility of data are of vital importance, and there are several ways to fulfill this task. Magnetic storage of data is a well-established method, and the range of materials used is continuously being extended. In this study, the magnetic remanence of thermally sprayed tungsten carbide–cobalt (WC‐Co) coatings is examined as a function of their thickness. Two magnetic fields differing in strength and geometry are imprinted into the coatings and the resulting remanence field is measured. It is found that two effects in combination determine the effective value of the magnetic remanence usable for magnetic data storage.
43.
This article proposes an optimization–simulation model for planning the transport of supplies to large public infrastructure works located in congested urban areas. The purpose is to minimize their impact on the environment and on private transportation users of the local road network. To achieve this goal, the authors propose and solve an optimization problem that minimizes the total system cost, made up of the operating costs of the various alternatives for taking supplies to the worksite and the costs borne by private vehicle users as a result of the increased congestion caused by heavy goods vehicles transporting material to the worksite. The proposed optimization problem is a bi-level mathematical programming model. The upper level defines the total cost of the system, which is minimized subject to environmental constraints on atmospheric and noise pollution. The lower level defines the optimization problem representing private transportation user behavior, assuming users choose the route that minimizes their total individual journey cost. Given the special characteristics of the problem, a heuristic algorithm is proposed for finding optimum solutions. Both the model and the specific solution algorithm are applied to the real case of building a new port at Laredo (Northern Spain). A series of interesting conclusions is drawn from the corresponding sensitivity analysis.
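The bi-level structure can be sketched on a toy two-link network: the upper level chooses how to split heavy-goods trips between two access roads, and the lower level computes the private-car user equilibrium that results. This is an illustration of the nesting only, not the authors' model; the BPR-style cost function, all network parameters, and the truck operating-cost weight are assumptions:

```python
def link_cost(flow, t0, cap):
    """BPR-style travel time on one link (assumed volume-delay form)."""
    return t0 * (1.0 + 0.15 * (flow / cap) ** 4)

T0 = (10.0, 12.0)        # free-flow times (assumed)
CAP = (1000.0, 1200.0)   # link capacities (assumed)

def user_equilibrium(D, H1, H2):
    """Lower level: split D car trips across the two links so individual
    costs equalize, solved by bisection on the flow x sent to link 1.
    H1, H2 are the heavy-goods trips already loaded on each link."""
    lo, hi = 0.0, D
    for _ in range(60):
        x = 0.5 * (lo + hi)
        c1 = link_cost(x + H1, T0[0], CAP[0])
        c2 = link_cost(D - x + H2, T0[1], CAP[1])
        if c1 > c2:
            hi = x
        else:
            lo = x
    return 0.5 * (lo + hi)

def total_cost(D, H, h1):
    """Upper level: total user travel time plus an assumed operating-cost
    weight on the H heavy-goods trips, h1 of which use link 1."""
    h2 = H - h1
    x = user_equilibrium(D, h1, h2)
    c1 = link_cost(x + h1, T0[0], CAP[0])
    c2 = link_cost(D - x + h2, T0[1], CAP[1])
    return x * c1 + (D - x) * c2 + 2.0 * (h1 * c1 + h2 * c2)

# Enumerate upper-level decisions (a stand-in for the paper's heuristic).
best_split = min(range(0, 101, 10), key=lambda h1: total_cost(2000.0, 100.0, h1))
```

The enumeration stands in for the paper's heuristic algorithm; the point is only that every upper-level trial re-solves the lower-level equilibrium.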
44.

Background

COSMIC Function Points and traditional Function Points (i.e., IFPUG Function Points and more recent variations such as NESMA and FISMA) are probably the best known and most widely used Functional Size Measurement methods. The relationship between the two kinds of Function Points still needs to be investigated: if traditional Function Points could be accurately converted into COSMIC Function Points and vice versa, then by measuring one kind of Function Points one would obtain the other, and the two could be used interchangeably. Several studies have evaluated whether a correlation or a conversion function between the two measures exists. Specifically, it has been suggested that the relationship between traditional Function Points and COSMIC Function Points may not be linear, i.e., the value of COSMIC Function Points seems to increase more than proportionally with an increase of traditional Function Points.

Objective

This paper aims at verifying this hypothesis using available datasets that collect both FP and CFP size measures.

Method

Rigorous statistical analysis techniques are used, specifically Piecewise Linear Regression, whose applicability conditions are systematically checked. A Piecewise Linear Regression curve is a series of interconnected segments; in this paper, we focus on curves composed of two segments. We also use Linear and Parabolic Regression to check whether, and to what extent, Piecewise Linear Regression provides an advantage over other regression techniques. Two categories of regression techniques are used: Ordinary Least Squares regression, based on the usual minimization of the sum of squared residuals (equivalently, of the average squared residual), and Least Median of Squares regression, a robust technique based on the minimization of the median squared residual. Using a robust regression technique helps filter out the excessive influence of outliers.

Results

The analysis of the relationship between traditional Function Points and COSMIC Function Points based on the aforementioned data analysis techniques yields statistically valid models. However, different results are achieved for the various available datasets. In practice, we obtained statistically valid linear, piecewise linear, and non-linear conversion formulas for several datasets; in general, none of these is better than the others in a statistically significant manner.

Conclusions

Practitioners interested in the conversion of FP measures into CFP (or vice versa) cannot just pick a conversion model and be sure that it will yield the best results. All the regression models we tested provide good results with some datasets. In practice, all the models described in the paper, both linear and non-linear, should be evaluated in order to identify the ones best suited to the specific dataset at hand.
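A two-segment piecewise linear fit of the kind the paper applies to FP/CFP data can be sketched as an OLS hinge model with a grid search over candidate breakpoints (an illustration only; the paper's applicability checks and the robust Least Median of Squares variant are omitted):

```python
import numpy as np

def fit_two_segment(x, y):
    """Continuous two-segment piecewise linear fit by OLS:
    y ~ a + b*x + c*max(x - k, 0), grid-searching the breakpoint k
    over interior data points and keeping the lowest SSE."""
    best = None
    for k in np.unique(x)[1:-1]:
        X = np.column_stack([np.ones_like(x), x, np.maximum(x - k, 0.0)])
        beta, *_ = np.linalg.lstsq(X, y, rcond=None)
        sse = float(np.sum((y - X @ beta) ** 2))
        if best is None or sse < best[0]:
            best = (sse, float(k), beta)
    return best  # (sse, breakpoint, coefficients [a, b, c])

def predict(x, k, beta):
    """Evaluate the fitted two-segment model."""
    a, b, c = beta
    return a + b * x + c * np.maximum(x - k, 0.0)
```

On data whose slope genuinely changes at some size, the recovered breakpoint marks where the FP-to-CFP relationship stops being proportional.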
45.
Air pollution has a negative impact on human health. For this reason, it is important to correctly forecast over-threshold events so as to give timely warnings to the population. Nonlinear models of the nonlinear autoregressive with exogenous variable (NARX) class have been extensively used to forecast air pollution time series, mainly using artificial neural networks (NNs) to model the nonlinearities. This work discusses the possible advantages of using polynomial NARX models instead, in combination with suitable model structure selection methods. Furthermore, a suitably weighted mean square error (MSE) one-step-ahead prediction cost function is used in the identification/learning process to enhance the model's performance in peak estimation, which is the final purpose of this application. The proposed approach is applied to ground-level ozone concentration time series. An extended simulation analysis compares the two classes of models on a selected case study (the Milan metropolitan area) and investigates the effect of different weighting functions in the identification performance index. Results show that polynomial NARX models correctly reconstruct ozone concentrations, with performance similar to NN-based NARX models, while providing additional information, such as the best set of regressors to describe the studied phenomena. The simulation analysis also demonstrates the potential benefits of the weighted cost function, especially in increasing the reliability of peak estimation.
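The weighted one-step-ahead identification can be sketched as weighted least squares on a polynomial NARX regressor matrix. The degree-2 regressor set and the peak-emphasizing weights below are assumptions for illustration; the paper selects the regressor set with dedicated structure-selection methods:

```python
import numpy as np

def build_regressors(y, u, ny=2, nu=2):
    """Degree-2 polynomial NARX regressor matrix for one-step-ahead
    prediction: constant, lagged outputs/inputs, and their squares
    (an assumed structure, for illustration only)."""
    n = max(ny, nu)
    rows, targets = [], []
    for t in range(n, len(y)):
        lin = [y[t - i] for i in range(1, ny + 1)] + \
              [u[t - i] for i in range(1, nu + 1)]
        rows.append([1.0] + lin + [v * v for v in lin])
        targets.append(y[t])
    return np.array(rows), np.array(targets)

def fit_weighted(Phi, y, w):
    """Weighted least squares: minimizes sum_t w_t * (y_t - Phi_t theta)^2.
    Choosing w_t that grows with y_t emphasizes peak concentrations."""
    s = np.sqrt(w)
    theta, *_ = np.linalg.lstsq(Phi * s[:, None], y * s, rcond=None)
    return theta
```

Uniform weights recover the ordinary MSE fit; a weight such as `1 + y**2` penalizes errors at high concentrations more heavily, which is the mechanism behind the improved peak reliability discussed above.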
46.
Inspired by recent work on robust and fast computation of 3D Local Reference Frames (LRFs), we propose a novel pipeline for coarse registration of 3D point clouds. Key to the method are: (i) the observation that any two corresponding points endowed with an LRF provide a hypothesis on the rigid motion between two views; (ii) the intuition that feature points can be matched based solely on cues directly derived from the computation of the LRF; (iii) a feature detection approach relying on a saliency criterion which captures the ability to establish an LRF repeatably. Unlike related work in the literature, we also propose a comprehensive experimental evaluation based on diverse kinds of data (such as those acquired by laser scanners, Kinect, and stereo cameras), as well as a quantitative comparison with other methods. We also address the issue of setting the many parameters that characterize coarse registration pipelines fairly and realistically. The experimental evaluation shows that our method handles data acquired by different sensors effectively and is remarkably fast.
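Observation (i), that a single correspondence endowed with LRFs fixes a full rigid-motion hypothesis, can be sketched directly (the column-axis convention for the LRF matrices is an assumption):

```python
import numpy as np

def rigid_motion_from_lrf_pair(p1, L1, p2, L2):
    """A single correspondence (p1, p2) with attached local reference
    frames L1, L2 (3x3 rotation matrices whose columns are the frame
    axes) fixes a full rigid motion: R maps frame 1 onto frame 2,
    and t then aligns the points."""
    R = L2 @ L1.T
    t = p2 - R @ p1
    return R, t
```

With three-point methods a RANSAC loop must sample triplets; here every single match already votes for a full 6-DoF hypothesis, which is what makes LRF-based coarse registration cheap.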
47.
The main goal of this paper is to show how relatively minor modifications of well-known algorithms (in particular, back-propagation) can dramatically increase the performance of an artificial neural network (ANN) for time series prediction. We denote our proposed sets of modifications as the 'self-momentum', 'Freud' and 'Jung' rules. In our opinion, they exemplify an alternative approach to the design of learning strategies for ANNs, one that focuses on basic mathematical conceptualization rather than on formalism and demonstration. The complexity of actual prediction problems makes it necessary to experiment with modelling possibilities whose inherent mathematical properties are often not yet well understood. The problem of time series prediction in stock markets is a case in point: it is well known that asset price dynamics in financial markets are difficult to trace, let alone to predict with an operationally interesting degree of accuracy. We therefore take financial prediction as a meaningful test bed for validating our techniques. We discuss in some detail both the theoretical underpinnings of the technique and our case study on financial prediction, finding encouraging evidence that supports the theoretical and operational viability of our new ANN specifications. Ours is clearly only a preliminary step; further developments of ANN architectures with increasingly sophisticated 'learning to learn' characteristics are now under study and test.
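The abstract does not detail the 'self-momentum', 'Freud' and 'Jung' rules, so the sketch below shows only the classical momentum update that such back-propagation variants typically modify, on a single linear unit with MSE loss (all hyperparameters are assumed):

```python
import numpy as np

def train_momentum(X, y, lr=0.01, mu=0.9, epochs=500):
    """Gradient descent with a classical momentum term:
        v <- mu * v - lr * grad ;  w <- w + v
    The velocity v smooths successive gradients, which is the kind of
    'minor modification' the paper builds on (its specific rules are
    not reproduced here)."""
    w = np.zeros(X.shape[1])
    v = np.zeros_like(w)
    for _ in range(epochs):
        grad = 2.0 * X.T @ (X @ w - y) / len(y)
        v = mu * v - lr * grad
        w = w + v
    return w
```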
48.
This review concerns the recently developed ionization source named surface-activated chemical ionization (SACI), which employs the interaction with a surface held at low voltage to activate the ionization of sample molecules, increasing sensitivity in the analysis of various compounds of biological and clinical interest. The gains are due to a strong decrease in chemical noise and an increase in ionization efficiency. The source has been employed for the analysis of compounds of widely different molecular mass and polarity (drugs of abuse and pharmaceutical drugs, amino acids, steroids, peptides, and proteins). The theoretical mechanism behind SACI, together with its benefits, disadvantages, applications, and future developments, is reported and discussed.
49.
Estimation of predictive accuracy in survival analysis using R and S-PLUS
When the purpose of a survival regression model is to predict future outcomes, the predictive accuracy of the model needs to be evaluated before practical application. Various measures of predictive accuracy have been proposed for survival data, none of which has been adopted as a standard, and few of them are implemented in statistical software. We developed the surev library for R and S-PLUS, which includes functions for evaluating the predictive accuracy measures proposed by Schemper and Henderson. The library evaluates the predictive accuracy of parametric regression models and of Cox models. The predictive accuracy of the Cox model can also be obtained when time-dependent covariates are included because of non-proportional hazards, or when using Bayesian model averaging. The use of the library is illustrated with examples based on a real data set.
50.
In the theory of graph rewriting, the use of coalescing rules, i.e., rules which, besides deleting and generating graph items, can coalesce parts of the graph, turns out to be quite useful for modelling purposes but, at the same time, problematic for the development of a satisfactory partial order concurrent semantics for rewrites. Rewriting over graphs with equivalences, i.e., (typed hyper-)graphs equipped with an equivalence over nodes, provides a technically convenient replacement for graph rewriting with coalescing rules, for which a truly concurrent semantics can easily be defined. The expressivity of the formalism is tested in a setting where coalescing rules typically play a basic role: the encoding of calculi with name passing as graph rewriting systems. Specifically, we show how the monadic fragment of the solo calculus, one of the dialects whose distinctive feature is name fusion, can be encoded as a rewriting system over graphs with equivalences.
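The core idea, keeping nodes distinct while recording an equivalence instead of destructively merging them, can be sketched with a union-find structure (an illustration of the concept only, not the paper's categorical construction):

```python
# A rewrite step records that two nodes are equivalent; the graph is
# then read modulo the equivalence, here kept in a union-find structure.

class NodeEquivalence:
    def __init__(self):
        self.parent = {}

    def find(self, n):
        """Canonical representative of n's equivalence class."""
        self.parent.setdefault(n, n)
        while self.parent[n] != n:
            self.parent[n] = self.parent[self.parent[n]]  # path halving
            n = self.parent[n]
        return n

    def coalesce(self, a, b):
        """Record that a and b are equivalent (a coalescing step)."""
        self.parent[self.find(a)] = self.find(b)

    def canonical_edges(self, edges):
        """Edges of the quotient graph: endpoints up to equivalence."""
        return {(self.find(a), self.find(b)) for a, b in edges}
```

Because no node is ever deleted, independent rewrites that coalesce different pairs commute, which is what makes the partial-order concurrent semantics easy to define.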