2742 results found (search time: 15 ms).
71.
Given a Feynman parameter integral, depending on a single discrete variable N and a real parameter ε, we discuss a new algorithmic framework to compute the first coefficients of its Laurent series expansion in ε. In a first step, the integrals are expressed by hypergeometric multi-sums by means of symbolic transformations. Given this sum format, we develop new summation tools to extract the first coefficients of its series expansion whenever they are expressible in terms of indefinite nested product-sum expressions. In particular, we enhance the known multi-sum algorithms to derive recurrences for sums with complicated boundary conditions, and we present new algorithms to find formal Laurent series solutions of a given recurrence relation.
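The coefficient-extraction step can be illustrated on a toy case (this is not the paper's algorithm, only truncated power-series arithmetic over exact rationals): the Feynman parameter integral ∫₀¹ x^(−ε) dx evaluates to 1/(1 − ε), whose expansion coefficients around ε = 0 are all 1.

```python
from fractions import Fraction

def series_inv(a, order):
    # Reciprocal of a truncated power series in eps (constant term nonzero):
    # solve a * inv = 1 order by order.
    inv = [Fraction(1) / a[0]]
    for n in range(1, order):
        s = sum(a[i] * inv[n - i] for i in range(1, n + 1))
        inv.append(-s / a[0])
    return inv

# Toy integral: I(eps) = ∫₀¹ x^(-eps) dx = 1/(1 - eps).  Extract the first
# coefficients of its expansion around eps = 0 (all equal to 1).
one_minus_eps = [Fraction(1), Fraction(-1), Fraction(0), Fraction(0)]
coeffs = series_inv(one_minus_eps, 4)
print(coeffs)
```

Working over `Fraction` keeps the coefficients exact, which matters in symbolic summation where floating-point roundoff would corrupt later expansion orders.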
72.
Helicopters are valuable because they can land at unprepared sites; however, current unmanned helicopters are unable to select or validate landing zones (LZs) and approach paths. For operation in unknown terrain it is necessary to assess the safety of an LZ. In this paper, we describe a lidar-based perception system that enables a full-scale autonomous helicopter to identify and land in previously unmapped terrain with no human input. We describe the problem, real-time algorithms, perception hardware, and results. Our approach extends the state of the art in terrain assessment by incorporating not only plane fitting but also factors such as terrain/skid interaction, rotor and tail clearance, wind direction, clear approach/abort paths, and ground paths. In experiments in urban and natural environments, we successfully classified LZs from point cloud maps. We also present results from 8 successful landing experiments with varying ground clutter and approach directions, in which the helicopter selected its own landing site and approach and then landed. To our knowledge, these experiments were the first demonstration of a full-scale autonomous helicopter that selected its own landing zones and landed.
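The plane-fitting core of such terrain assessment can be sketched as follows (thresholds and data are invented for illustration; the full system also checks clearances, approach paths, and skid interaction):

```python
import numpy as np

def assess_lz(points, max_slope_deg=10.0, max_roughness=0.05):
    """Fit z = a*x + b*y + c to a patch of lidar returns and judge an LZ.

    Slope comes from the fitted plane normal, roughness from the residual
    spread.  Thresholds are illustrative, not values from the paper.
    """
    A = np.c_[points[:, 0], points[:, 1], np.ones(len(points))]
    coeffs, *_ = np.linalg.lstsq(A, points[:, 2], rcond=None)
    a, b, _ = coeffs
    slope_deg = np.degrees(np.arctan(np.hypot(a, b)))
    roughness = np.std(points[:, 2] - A @ coeffs)
    safe = slope_deg <= max_slope_deg and roughness <= max_roughness
    return safe, slope_deg, roughness

# Synthetic patch: gently tilted ground (about 2.9 degrees) with 1 cm noise.
rng = np.random.default_rng(0)
xy = rng.uniform(0, 5, size=(200, 2))
flat = np.c_[xy, 0.05 * xy[:, 0] + rng.normal(0, 0.01, 200)]
ok, slope, rough = assess_lz(flat)
print(ok, round(slope, 1))
```

Real systems evaluate many such patches over a point cloud map and combine the per-patch verdicts with the vehicle-specific constraints listed in the abstract.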
73.
We present a maximum margin parameter learning algorithm for Bayesian network classifiers using a conjugate gradient (CG) method for optimization. In contrast to previous approaches, we maintain the normalization constraints on the parameters of the Bayesian network during optimization, i.e., the probabilistic interpretation of the model is not lost. This enables us to handle missing features in discriminatively optimized Bayesian networks. In experiments, we compare the classification performance of maximum margin parameter learning to conditional likelihood and maximum likelihood learning approaches. Discriminative parameter learning significantly outperforms generative maximum likelihood estimation for naive Bayes and tree augmented naive Bayes structures on all considered data sets. Furthermore, maximizing the margin dominates the conditional likelihood approach in terms of classification performance in most cases. We provide results for a recently proposed maximum margin optimization approach based on convex relaxation. While the classification results are highly similar, our CG-based optimization is computationally up to orders of magnitude faster. Margin-optimized Bayesian network classifiers achieve classification performance comparable to support vector machines (SVMs) using fewer parameters. Moreover, we show that unanticipated missing feature values during classification can be easily processed by discriminatively optimized Bayesian network classifiers, a case where discriminative classifiers usually require mechanisms to complete unknown feature values in the data first.
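Why keeping the probabilistic interpretation helps with missing features can be seen in a toy naive Bayes classifier (parameters here are invented, not learned by the paper's max-margin method): because the parameters remain normalized probabilities, a missing feature is marginalized out, which in naive Bayes simply drops its factor from the product.

```python
import math

# Illustrative binary naive Bayes: two classes, two binary features.
prior = {0: 0.5, 1: 0.5}
theta = {0: [0.8, 0.3],   # P(feature_j = 1 | class 0)
         1: [0.2, 0.9]}   # P(feature_j = 1 | class 1)

def log_posterior(x, c):
    # x maps feature index -> 0/1; absent indices are missing features.
    # Marginalizing a missing feature just removes its factor.
    lp = math.log(prior[c])
    for j, v in x.items():
        lp += math.log(theta[c][j] if v == 1 else 1.0 - theta[c][j])
    return lp

def classify(x):
    return max(prior, key=lambda c: log_posterior(x, c))

print(classify({0: 1, 1: 0}))  # both features observed -> class 0
print(classify({1: 1}))        # feature 0 missing, marginalized -> class 1
```

A purely discriminative model (e.g., an SVM) has no such marginal and must impute or otherwise complete the missing value first, which is the contrast the abstract draws.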
74.
Thrun, Sebastian; Burgard, Wolfram; Fox, Dieter. Machine Learning 31(1-3): 29-53, 1998.
This paper addresses the problem of building large-scale geometric maps of indoor environments with mobile robots. It poses the map building problem as a constrained, probabilistic maximum-likelihood estimation problem. It then devises a practical algorithm for generating the most likely map from data, along with the most likely path taken by the robot. Experimental results in cyclic environments of size up to 80 by 25 meters illustrate the appropriateness of the approach.
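A 1-D caricature (not the paper's algorithm; measurement values are invented) shows the core idea: under Gaussian noise, the most likely path is the least-squares solution of the odometry and loop-closure constraints, and the loop closure pulls accumulated drift back.

```python
import numpy as np

# Poses x0..x3 along a corridor.  Constraints, each one row of A x = b:
#   anchor        x0 = 0
#   odometry      x_{i+1} - x_i = measured step
#   loop closure  x3 - x0 = independent measurement
odometry = [1.05, 0.98, 1.03]   # noisy step lengths (sum: 3.06, i.e. drift)
loop = 3.00                      # loop-closure measurement of x3 - x0

rows, rhs = [[1, 0, 0, 0]], [0.0]                 # anchor
for i, d in enumerate(odometry):                   # odometry edges
    r = [0, 0, 0, 0]; r[i] = -1; r[i + 1] = 1
    rows.append(r); rhs.append(d)
rows.append([-1, 0, 0, 1]); rhs.append(loop)       # loop closure

A, b = np.array(rows, float), np.array(rhs)
x, *_ = np.linalg.lstsq(A, b, rcond=None)          # ML path estimate
print(np.round(x, 3))  # x ≈ [0, 1.035, 2.0, 3.015]
```

The full 2-D problem replaces each scalar pose with a pose in the plane and each row with a nonlinear constraint, but the maximum-likelihood structure is the same.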
75.
This research is an effort towards providing higher-level Design for Environment (DFE) tools for a broad industrial region. Issues are discussed ranging from the levels immediately above existing design tools to the envisioned highest level for a broad geographical region. A tool for the regional planning of DFE activities is proposed, built on a model of material flows across the industry. The Multi-Lifecycle approach is supported by organizing the input/output flows for industries, potentially utilizing waste material, side products, and recycling. Capitalizing on the conceptual integration of design and process activities, an Abstract Design Environment is used for the design of the essentially process-oriented material flow tool. In this context, the relations among design, process, and flow-modeling concepts are discussed.
76.
Bayesian Landmark Learning for Mobile Robot Localization
To operate successfully in indoor environments, mobile robots must be able to localize themselves. Most current localization algorithms lack flexibility, autonomy, and often optimality, since they rely on a human to determine what aspects of the sensor data to use in localization (e.g., what landmarks to use). This paper describes a learning algorithm, called BaLL, that enables mobile robots to learn what features/landmarks are best suited for localization, and also to train artificial neural networks for extracting them from the sensor data. A rigorous Bayesian analysis of probabilistic localization is presented, which produces a rational argument for evaluating features, for selecting them optimally, and for training the networks that approximate the optimal solution. In a systematic experimental study, BaLL outperforms two other recent approaches to mobile robot localization.
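The probabilistic machinery that BaLL's analysis builds on can be sketched as a minimal discrete Bayes filter (the landmark-learning networks themselves are not shown; the ring world and sensor model below are invented for illustration):

```python
# 1-D localization on a ring of N cells.  Landmarks sit at known cells;
# the sensor reports "landmark seen" with the noise model below.
N = 10
landmarks = {2, 3}            # cells where a landmark is visible
P_HIT, P_MISS = 0.9, 0.1      # assumed sensor model

def move(bel, step):
    # Noise-free cyclic motion, for brevity: shift the belief by `step`.
    return bel[-step:] + bel[:-step]

def sense(bel, saw_landmark):
    # Bayes update: weight each cell by the observation likelihood, renormalize.
    post = [p * (P_HIT if (cell in landmarks) == saw_landmark else P_MISS)
            for cell, p in enumerate(bel)]
    z = sum(post)
    return [p / z for p in post]

bel = [1.0 / N] * N           # uniform prior: position unknown
bel = sense(bel, True)        # a landmark is visible: cell 2 or 3
bel = move(bel, 1)            # robot advances one cell
bel = sense(bel, True)        # still sees a landmark, so it was at cell 2
print(max(range(N), key=bel.__getitem__))  # -> 3
```

BaLL's contribution sits above this loop: it learns which sensor features make the `sense` likelihood most informative for localization.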
77.
Transitions between plant species assemblages are often continuous, with the form of the transition dependent on the 'slope' of environmental gradients and on the style of self-organization in the vegetation. Image segmentation can yield misleading or even erroneous results if applied to continuous spatial changes in vegetation. Even methods that allow multiple-class memberships of pixels presuppose the existence of ideal types of species assemblages that constitute the mixtures, an assumption that does not fit continua, where any section of a gradient is as 'pure' as any other, as in gradual modulations of grassland species composition. We therefore attempted to spatially model floristic gradients in Bavarian meadows by extrapolating axes of an unconstrained ordination of species data. The models were based on high-resolution hyperspectral airborne imagery. We further modelled the distribution of plant functional response types (Ellenberg indicator values) and the cover values of selected species. The models were built with partial least squares (PLS) regression analyses, and their practical utility was evaluated by full leave-one-out cross-validation. The modelled floristic gradients showed considerable agreement with ground-based observations (R² = 0.71 and 0.66 for the first two ordination axes). Besides mapping the most important continuous floristic differences, we mapped gradients in plant functional response groups as represented by averaged Ellenberg indicator values for soil pH (R² = 0.76), water supply (R² = 0.66), and nutrient supply (R² = 0.75), while models for the cover of single species were weak. Compared to many other vegetation attributes, plant species composition is difficult to detect with remote sensing techniques, partly because of a lack of compatibility between the methods of vegetation ecology and remote sensing. We believe the present study can increase this compatibility, since neither spectral nor vegetation information is lost in a classification step.
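The modelling pipeline can be sketched in miniature (a one-component PLS1 via NIPALS on synthetic "spectra"; the study used multi-component PLS on real hyperspectral bands, and all data and dimensions below are invented):

```python
import numpy as np

def pls1_fit(X, y):
    # One-component PLS1 (NIPALS): project X onto the direction of maximal
    # covariance with y, then regress y on the resulting scores.
    xm, ym = X.mean(axis=0), y.mean()
    Xc, yc = X - xm, y - ym
    w = Xc.T @ yc                      # weight vector: covariance direction
    w /= np.linalg.norm(w)
    t = Xc @ w                         # scores
    b = (t @ yc) / (t @ t)             # inner regression coefficient
    return xm, ym, w, b

def pls1_predict(model, X):
    xm, ym, w, b = model
    return ym + b * ((X - xm) @ w)

def loo_r2(X, y):
    # Full leave-one-out cross-validation, as used to evaluate the models.
    n = len(y)
    preds = np.empty(n)
    for i in range(n):
        mask = np.arange(n) != i
        model = pls1_fit(X[mask], y[mask])
        preds[i] = pls1_predict(model, X[i][None, :])[0]
    return 1 - np.sum((y - preds) ** 2) / np.sum((y - y.mean()) ** 2)

rng = np.random.default_rng(1)
X = rng.normal(size=(60, 8))                                  # plots x "bands"
y = 2.0 * X[:, 0] + 0.5 * X[:, 1] + rng.normal(0, 0.1, 60)    # ordination axis
print(round(loo_r2(X, y), 2))
```

In the study's setting, `y` would be an ordination axis score or an averaged Ellenberg indicator value per plot, and the cross-validated R² plays the role of the reported model quality.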
78.
Selecting a Cost-Effective Test Case Prioritization Technique
Regression testing is an expensive testing process used to validate modified software and detect whether new faults have been introduced into previously tested code. To reduce the cost of regression testing, software testers may prioritize their test cases so that those which are more important, by some measure, are run earlier in the regression testing process. One goal of prioritization is to increase a test suite's rate of fault detection. Previous empirical studies have shown that several prioritization techniques can significantly improve rate of fault detection, but these studies have also shown that the effectiveness of these techniques varies considerably across various attributes of the program, test suites, and modifications being considered. This variation makes it difficult for a practitioner to choose an appropriate prioritization technique for a given testing scenario. To address this problem, we analyze the fault detection rates that result from applying several different prioritization techniques to several programs and modified versions. The results of our analyses provide insights into which types of prioritization techniques are and are not appropriate under specific testing scenarios, and the conditions under which they are or are not appropriate. Our analysis approach can also be used by other researchers or practitioners to determine the prioritization techniques appropriate to other workloads.
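The standard measure of "rate of fault detection" in these studies is APFD (Average Percentage of Faults Detected); a minimal sketch (fault matrix and test ids are invented, and note APFD is computed post hoc from known fault exposure, while prioritization itself must rely on surrogates such as coverage):

```python
def apfd(order, faults_detected_by):
    """APFD = 1 - (TF_1 + ... + TF_m) / (n * m) + 1 / (2n), where TF_i is
    the position of the first test in `order` that detects fault i.

    order: test case ids in execution order.
    faults_detected_by: maps fault id -> set of test ids that detect it.
    """
    n, m = len(order), len(faults_detected_by)
    position = {t: i + 1 for i, t in enumerate(order)}
    tf = [min(position[t] for t in tests)
          for tests in faults_detected_by.values()]
    return 1 - sum(tf) / (n * m) + 1 / (2 * n)

faults = {"f1": {"t3"}, "f2": {"t1", "t3"}, "f3": {"t2"}}
print(apfd(["t1", "t2", "t3"], faults))  # baseline order: 0.5
print(apfd(["t3", "t2", "t1"], faults))  # running t3 first exposes f1, f2 sooner
```

Comparing APFD across orderings is exactly how the cited empirical studies rank prioritization techniques.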
79.
Investigating the dynamical and physical properties of cosmic dust can reveal a great deal of information about both the dust and its many sources. Over recent years, several spacecraft (e.g., Cassini, Stardust, Galileo, and Ulysses) have successfully characterised interstellar, interplanetary, and circumplanetary dust using a variety of techniques, including in situ analyses and sample return. Charge, mass, and velocity measurements of the dust are performed either directly (induced charge signals) or indirectly (mass and velocity from impact ionisation signals or crater morphology) and constrain the dynamical parameters of the dust grains. Dust compositional information may be obtained via either time-of-flight mass spectrometry of the impact plasma or direct sample return. The accurate and reliable interpretation of collected spacecraft data requires a comprehensive programme of terrestrial instrument calibration. This process involves accelerating suitable solar system analogue dust particles to hypervelocity speeds in the laboratory, an activity performed at the Max Planck Institut für Kernphysik in Heidelberg, Germany. Here, a 2 MV Van de Graaff accelerator electrostatically accelerates charged micron and submicron-sized dust particles to speeds of up to 80 km s⁻¹. Recent advances in dust production and processing have allowed solar system analogue dust particles (silicates and other minerals) to be coated with a thin conductive shell, enabling them to be charged and accelerated. Refinements and upgrades to the beam-line instrumentation and electronics now allow the reliable selection of particles with velocities of 1-80 km s⁻¹ and diameters between 0.05 μm and 5 μm. This ability to select particles for subsequent impact studies based on their charges, masses, or velocities is provided by a particle selection unit (PSU). The PSU contains a field-programmable gate array, capable of monitoring the particles' speeds and charges in real time, and is controlled remotely by a custom, platform-independent software package. The new control instrumentation and electronics, together with the wide range of particle types that can be accelerated, allow the controlled investigation of hypervelocity impact phenomena across a hitherto unobtainable range of impact parameters.
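The final speed of an electrostatically accelerated grain follows from energy conservation, (1/2) m v² = q U, so v = sqrt(2 q U / m); a back-of-the-envelope sketch (grain charge, size, and density below are illustrative assumptions, not measured values from the Heidelberg facility):

```python
import math

U = 2.0e6                      # Van de Graaff potential, volts (2 MV)
radius = 0.5e-6                # a 1 um diameter grain
density = 2.65e3               # silica-like density, kg/m^3
q = 1.0e-15                    # assumed surface charge, coulombs

m = density * (4.0 / 3.0) * math.pi * radius ** 3   # grain mass, kg
v = math.sqrt(2 * q * U / m)                        # energy conservation
print(f"{v / 1000:.1f} km/s")
```

Because v scales as sqrt(q/m) and smaller grains carry more charge per unit mass, the submicron end of the size range reaches the tens of km/s quoted in the abstract.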