51.
Estimation of predictive accuracy in survival analysis using R and S-PLUS   (total citations: 1; self-citations: 0; citations by others: 1)
When a survival regression model is intended to predict future outcomes, its predictive accuracy must be evaluated before practical application. Various measures of predictive accuracy have been proposed for survival data, but none has been adopted as a standard, and few are implemented in statistical software. We developed the surev library for R and S-PLUS, which provides functions for evaluating the predictive accuracy measures proposed by Schemper and Henderson. The library evaluates the predictive accuracy of parametric regression models and of Cox models. The predictive accuracy of the Cox model can also be obtained when time-dependent covariates are included because of non-proportional hazards, or when using Bayesian model averaging. The use of the library is illustrated with examples based on a real data set.
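The surev library itself implements the Schemper–Henderson measures; as a simpler illustration of what a predictive-accuracy measure for censored survival data looks like, the sketch below computes Harrell's concordance index (a different, widely used measure, not the one provided by surev) in plain Python:

```python
from itertools import combinations

def harrell_c(times, events, risk_scores):
    """Harrell's concordance index for right-censored survival data.

    A pair (i, j) is comparable when the shorter observed time ends in an
    event; the pair is concordant when the subject who failed earlier was
    assigned the higher risk score. Ties in score get half credit.
    """
    concordant, comparable = 0.0, 0
    for i, j in combinations(range(len(times)), 2):
        if times[j] < times[i]:      # order the pair so i fails/censors first
            i, j = j, i
        if times[i] == times[j] or not events[i]:
            continue                 # tie in time, or earlier time is censored
        comparable += 1
        if risk_scores[i] > risk_scores[j]:
            concordant += 1.0
        elif risk_scores[i] == risk_scores[j]:
            concordant += 0.5
    return concordant / comparable if comparable else float("nan")

# Toy data: a higher risk score should mean earlier failure.
times  = [2.0, 4.0, 6.0, 8.0]
events = [1, 1, 0, 1]              # 1 = event observed, 0 = censored
scores = [0.9, 0.7, 0.2, 0.1]      # perfectly ordered -> C = 1.0
print(harrell_c(times, events, scores))  # 1.0
```

A value of 1.0 means perfect discrimination, 0.5 means no better than chance.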
52.
The novel concept of equivalent state randomly oriented flaws, developed from the generalized fracture toughness theory [1], is presented. Based on this concept, planar defects located in multiaxial stress field regions, characterized by combinations of the mode I, II, and III stress intensity factors, are described by a mode I equivalent state stress intensity factor K̄1 of identical function. Accordingly, the complex mode fracture criterion is exactly replaced by the conventional mode I criterion K̄1 = K1C. It is demonstrated that this criterion is mathematically equivalent to other, more complex generalized fracture criteria [2,4,5], i.e., it predicts the same critical conditions. Current approximate procedures applied to crack-like defects detected in structural components, based on reorienting or orthogonally projecting the defect onto a plane normal to the maximum principal tensile stress, are discussed and applied to two simple structural applications. When the results are compared with those from the proposed equivalent state flaw method, it is concluded that, to a large extent, these procedures are inconsistent and generate significant errors that may lead to incorrect decisions about the remaining service life of the structure. The equivalent state flaw concept is also used to establish the equivalent state mode I threshold value K̄1 corresponding to complex stress state fatigue loadings.


Operated for the U.S. Department of Energy, Contract No. DE-AC12-76-N0052.
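The abstract above does not reproduce the authors' formula for K̄1, which comes from the generalized toughness theory [1]. As a sketch of the general idea — collapsing a mixed-mode (K1, K2, K3) state into a single mode I equivalent that is compared against K1C — the snippet below uses the classical plane-strain energy-release-rate combination instead; this is an illustrative stand-in, not the paper's criterion:

```python
import math

def k_eq_energy(k1, k2, k3, nu=0.3):
    """Mode I equivalent SIF from the plane-strain energy release rate:
    G = (K1^2 + K2^2)/E' + K3^2*(1+nu)/E, equated to K_eq^2/E'.
    This yields K_eq = sqrt(K1^2 + K2^2 + K3^2/(1 - nu))."""
    return math.sqrt(k1**2 + k2**2 + k3**2 / (1.0 - nu))

def acceptable(k1, k2, k3, k1c, nu=0.3):
    # The mixed-mode flaw is acceptable while its mode I equivalent
    # stays below the conventional fracture toughness K_1C.
    return k_eq_energy(k1, k2, k3, nu) < k1c

# Pure mode I reduces to the conventional check: K_eq equals K1.
print(k_eq_energy(30.0, 0.0, 0.0))        # 30.0
print(acceptable(20.0, 10.0, 5.0, 60.0))  # True (K_eq ~ 23.1 < 60)
```

Units are consistent SIF units (e.g., MPa·√m); the Poisson's ratio default is arbitrary.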
53.
The adipose fin is small, unpaired, and usually located medially between the dorsal and caudal fins. Its taxonomic occurrence is very restricted; thus, it represents an important trait for taxon distinction. Since it plays no known vital physiological role and is easily removed, it is commonly used in mark-and-recapture studies. The present study characterizes the adipose fin of Prochilodus lineatus, which has been poorly explored in the literature. The adipose fin consists basically of a loose connective tissue core covered by a stratified epithelium supported by collagen fibers. In the epithelium, pigmented cells and alarm substance cells are found. Despite the name, neither adipocytes nor lipid droplets are observed in the structure of the fin.
54.
We propose a general framework to incorporate first-order logic (FOL) clauses, thought of as an abstract and partial representation of the environment, into kernel machines that learn within a semi-supervised scheme. We rely on a multi-task learning scheme where each task is associated with a unary predicate defined on the feature space, while higher-level abstract representations consist of FOL clauses made of those predicates. We reuse the kernel machine mathematical apparatus to solve the problem as primal optimization of a function composed of the loss on the supervised examples, the regularization term, and a penalty term that enforces the real-valued constraints derived from the predicates. Unlike classic kernel machines, however, depending on the logic clauses, the overall function to be optimized is no longer convex. An important contribution is to show that, while tackling the optimization by classic numerical schemes is likely to be hopeless, a stage-based learning scheme, in which we first learn from the supervised examples until convergence is reached and then continue by enforcing the logic clauses, is a viable way to attack the problem. Promising experimental results are given on artificial learning tasks and on the automatic tagging of BibTeX entries, emphasizing the comparison with plain kernel machines.
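A minimal sketch of the stage-based idea on a toy one-dimensional problem, assuming two unary predicates a and b linked by the clause "forall x: a(x) -> b(x)", relaxed to the penalty max(0, a(x) - b(x)). The models, data, and learning rates are invented for illustration (the paper works with kernel machines, not this bare logistic model):

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def fit_supervised(data, steps=300, lr=0.5):
    # Stage 1: plain 1-D logistic regression on the supervised examples.
    w = b = 0.0
    for _ in range(steps):
        gw = gb = 0.0
        for x, y in data:
            p = sigmoid(w * x + b)
            gw += (p - y) * x
            gb += (p - y)
        w -= lr * gw
        b -= lr * gb
    return w, b

def total_violation(pa, pb, xs):
    # Clause "forall x: a(x) -> b(x)" relaxed to the penalty max(0, a - b).
    return sum(max(0.0, sigmoid(pa[0] * x + pa[1]) - sigmoid(pb[0] * x + pb[1]))
               for x in xs)

def logic_stage(pa, pb, unlabeled, steps=300, lr=0.5, lam=1.0):
    # Stage 2: gradient descent on the (non-convex) clause penalty alone.
    # For brevity only predicate b is adjusted; the full scheme keeps the
    # supervised loss and the regularizer in the objective as well.
    (wa, ba), (wb, bb) = pa, pb
    for _ in range(steps):
        gw = gb = 0.0
        for x in unlabeled:
            a_x = sigmoid(wa * x + ba)
            b_x = sigmoid(wb * x + bb)
            if a_x > b_x:  # clause violated at x: push b(x) upward
                gw -= lam * b_x * (1.0 - b_x) * x
                gb -= lam * b_x * (1.0 - b_x)
        wb -= lr * gw
        bb -= lr * gb
    return wb, bb

# Toy setup: a(x) learns "x is positive"; b sees only negative examples,
# so the clause is initially violated on the unlabeled positive inputs.
pa = fit_supervised([(2.0, 1), (1.0, 1), (-1.0, 0), (-2.0, 0)])
pb = fit_supervised([(-2.0, 0), (-1.0, 0)])
unlabeled = [1.0, 2.0, 3.0]
before = total_violation(pa, pb, unlabeled)
pb = logic_stage(pa, pb, unlabeled)
after = total_violation(pa, pb, unlabeled)
print(after <= before)  # True: the logic stage never increases the penalty here
```

On this toy the second stage only ever raises b on the violating points, so the clause penalty is non-increasing.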
55.
A number of techniques that infer finite state automata from execution traces have been used to support test and analysis activities. Some of these techniques can produce automata that integrate information about the data flow, that is, they also represent how data values affect the operations executed by programs. Integrating information about operation sequences and data values into a single model is indeed conceptually useful to represent the behavior of a program accurately. However, it is still unclear whether handling heterogeneous types of information, such as operation sequences and data values, necessarily produces higher-quality models. In this paper, we present an empirical comparative study of techniques that infer simple automata and techniques that infer automata extended with data-flow information. We investigate the effectiveness of these techniques when applied to traces with different levels of sparseness, produced by different software systems. To the best of our knowledge, this is the first work that quantifies both the effect of adding data-flow information to automata and the effectiveness of the techniques as trace sparseness varies.
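As a minimal illustration of the "simple automaton" end of this spectrum, the sketch below builds a prefix tree acceptor from execution traces — the usual starting point before state-merging heuristics such as k-tails are applied. A data-flow-extended variant would additionally annotate events with abstracted parameter values. Event names are illustrative:

```python
def build_pta(traces):
    """Prefix tree acceptor: one state per distinct trace prefix.
    Inference techniques typically start from this tree and then
    merge states (e.g., k-tails) to generalize beyond the traces."""
    delta = [{}]          # transition function: state -> {event: next state}
    accepting = set()     # states where some observed trace ended
    for trace in traces:
        state = 0         # state 0 is the initial state
        for event in trace:
            if event not in delta[state]:
                delta[state][event] = len(delta)
                delta.append({})
            state = delta[state][event]
        accepting.add(state)
    return delta, accepting

def accepts(model, trace):
    delta, accepting = model
    state = 0
    for event in trace:
        if event not in delta[state]:
            return False
        state = delta[state][event]
    return state in accepting

traces = [["open", "read", "close"],
          ["open", "write", "close"]]
model = build_pta(traces)
print(accepts(model, ["open", "read", "close"]))  # True
print(accepts(model, ["open", "close"]))          # False
```

Without state merging the acceptor accepts exactly the observed traces; generalization (and, with it, the risk of over-generalization the paper measures) comes from the merging step.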
56.
Mechanisms to control concurrent access to project artefacts are needed to execute the software development process in an organized way. These mechanisms are implemented by concurrency control policies in version control systems that may inhibit (i.e., "lock") or allow (i.e., "not lock") parallel development. This work presents a novel approach named Orion that analyzes the project's change history and suggests the most appropriate concurrency control policy for each software element. This suggestion aims at minimizing conflict situations and thus improving the productivity of the development team. In addition, it identifies critical elements that do not work well under any of these policies and are candidates for refactoring. We evaluated Orion through two experimental studies, and the results, which indicated it was effective, led us to a prototype implementation. Apart from the Orion approach, this paper also presents the planning, execution, and analysis stages of the evaluation, and details of the prototype internals.
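The abstract does not detail Orion's analysis, so the sketch below is a loose, hypothetical illustration of the underlying idea: estimate a per-file conflict rate from the change history and suggest a pessimistic (lock) policy when it is high. The windowing, threshold, and all names are assumptions, not Orion's actual heuristics:

```python
from collections import defaultdict

def suggest_policies(history, conflict_threshold=0.25):
    """history: list of time windows, each a list of (developer, file)
    change records. A window in which two or more developers touch the
    same file counts as a potential conflict for that file."""
    touches = defaultdict(int)    # windows in which the file was changed
    conflicts = defaultdict(int)  # windows with concurrent changes
    for window in history:
        devs_per_file = defaultdict(set)
        for dev, path in window:
            devs_per_file[path].add(dev)
        for path, devs in devs_per_file.items():
            touches[path] += 1
            if len(devs) > 1:
                conflicts[path] += 1
    suggestion = {}
    for path in touches:
        rate = conflicts[path] / touches[path]
        suggestion[path] = "lock" if rate > conflict_threshold else "optimistic"
    return suggestion

history = [
    [("ana", "core.c"), ("bob", "core.c"), ("ana", "docs.md")],
    [("bob", "core.c"), ("ana", "core.c")],
    [("ana", "docs.md")],
]
print(suggest_policies(history))
# core.c is edited concurrently in every window -> "lock";
# docs.md never is -> "optimistic"
```

A file whose conflict rate stays high even under locking would be the kind of "critical element" the paper flags as a refactoring candidate.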
57.
This paper shows two examples of how the analysis of option pricing problems can lead to computational methods efficiently implemented in parallel. These computational methods outperform "general purpose" methods (e.g., Monte Carlo and finite difference methods). The GPU implementation of two numerical algorithms to price two specific derivatives (continuous barrier options and realized variance options) is presented. These algorithms are implemented in CUDA subroutines ready to run on Graphics Processing Units (GPUs) and their performance is studied. The realization of these subroutines is motivated by the extensive use of the derivatives considered in the financial markets to hedge or to take risk, and by the interest of financial institutions in the use of state-of-the-art hardware and software to speed up the decision process. The performance of these algorithms is measured using the (CPU/GPU) speed-up factor, that is, the ratio between the (wall clock) times required to execute the code on a CPU and on a GPU. The choice of the reference CPU and GPU used to evaluate the speed-up factors presented is stated. The outstanding performance of the algorithms developed is due to the mathematical properties of the pricing formulae used and to the ad hoc software implementation. In the case of realized variance options, when the computation is done in single precision, the comparisons between CPU and GPU execution times give speed-up factors of the order of a few hundred. For barrier options, the corresponding speed-up factors are of about fifteen to twenty. The CUDA subroutines to price barrier options and realized variance options can be downloaded from the website http://www.econ.univpm.it/recchioni/finance/w13. A more general reference to the work in mathematical finance of some of the authors and of their coauthors is the website http://www.econ.univpm.it/recchioni/finance/.
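The CUDA subroutines themselves are available at the URL above; as a CPU-side illustration of why barrier options parallelize so well, here is a minimal pure-Python Monte Carlo pricer for a down-and-out call under geometric Brownian motion (the parameters are arbitrary, and a GPU version would simply distribute the independent path loop across threads):

```python
import math
import random

def barrier_call_mc(s0, k, barrier, r, sigma, t, n_steps, n_paths, seed=0):
    """Monte Carlo price of a discretely monitored down-and-out call:
    the payoff max(S_T - K, 0) is voided on any path that touches the
    barrier. Every path is independent, which is exactly what makes the
    computation embarrassingly parallel on a GPU."""
    rng = random.Random(seed)
    dt = t / n_steps
    drift = (r - 0.5 * sigma**2) * dt
    vol = sigma * math.sqrt(dt)
    payoff_sum = 0.0
    for _ in range(n_paths):
        s, knocked_out = s0, False
        for _ in range(n_steps):
            s *= math.exp(drift + vol * rng.gauss(0.0, 1.0))
            if s <= barrier:
                knocked_out = True
                break
        if not knocked_out:
            payoff_sum += max(s - k, 0.0)
    return math.exp(-r * t) * payoff_sum / n_paths

price = barrier_call_mc(100, 100, 80, 0.05, 0.2, 1.0, 100, 10000)
print(round(price, 2))  # somewhat below the vanilla Black-Scholes value (~10.45)
```

The speed-up factor the paper reports is then simply wall-clock CPU time divided by wall-clock GPU time for the same computation.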
58.
The novel concept of generalized fracture toughness characterization of brittle materials subjected to multiaxial loadings is presented. The theory emphasizes the fracture process as the result of the opening action of the crack surfaces. The generalized fracture toughness values describing failure events due to combined loading systems lie on a Fracture Envelope characteristic of a given material. The Cartesian equation of the Envelope in the K1-K2 plane is specified by the conventional fracture toughness K1C and Poisson's ratio ν. A Griffith-type fracture criterion permits the prediction of crack propagation onset and crack growth direction.


Operated for the United States Department of Energy, Contract DE-AC-12-65SN00052.
59.
60.
Genetic programming researchers have shown a growing interest in the study of gene regulatory networks in the last few years. Our team has also contributed to the field by defining two systems for the automatic reverse engineering of gene regulatory networks, called GRNGen and GeNet. In this paper, we revisit this work by describing the two approaches in detail and comparing them empirically. The results we report, and in particular the fact that GeNet can be used on large networks while GRNGen cannot, encourage us to pursue the study of GeNet in the future. We conclude the paper by discussing the main research directions that we plan to investigate to improve GeNet.