1,971 query results found (search time: 15 ms)
101.
Most methods of change detection require considerable effort and expertise. Change-detection procedures are visual-, classification-, object- or vector-based. The goal of this research was to develop an automated, essentially unsupervised combination of methods to quantify deforestation on a per-pixel basis. The study area was the Gutu district in Zimbabwe. In the first step, Landsat Thematic Mapper (TM) scenes were spectrally unmixed by Spectral Mixture Analysis (SMA); the necessary endmembers were computed with the N-FINDR algorithm. After the unmixing process, the data were analysed with change vector analysis (CVA) utilizing spherical statistics. Thereafter, a combination of constraints, including a Bayesian threshold and spherical angles, was applied to identify deforestation. Together, these methods provided an accurate picture of the state of deforestation and enabled attribution to ‘fire-induced’ and ‘non-fire-induced’ classes.
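The CVA step above can be sketched as follows: for each pixel, the change vector between the two dates' endmember-fraction vectors yields a magnitude (how much changed) and a spherical angle (what kind of change). The names below are illustrative rather than the authors' code, and the thresholding rule is a simplified stand-in for the paper's Bayesian-threshold and spherical-angle constraints.

```python
import numpy as np

def change_vector_analysis(frac_t1, frac_t2):
    """Per-pixel change vectors between two endmember-fraction images.

    frac_t1, frac_t2: arrays of shape (rows, cols, n_endmembers) holding
    SMA fraction values for the two dates (names are illustrative).
    Returns the change magnitude and the spherical angle between the
    two fraction vectors at each pixel.
    """
    diff = frac_t2 - frac_t1
    magnitude = np.linalg.norm(diff, axis=-1)
    # Spherical statistics: angle between the two fraction vectors.
    dot = np.sum(frac_t1 * frac_t2, axis=-1)
    norms = np.linalg.norm(frac_t1, axis=-1) * np.linalg.norm(frac_t2, axis=-1)
    angle = np.arccos(np.clip(dot / np.maximum(norms, 1e-12), -1.0, 1.0))
    return magnitude, angle

def flag_deforestation(magnitude, angle, mag_threshold, angle_range):
    """Label pixels whose change magnitude exceeds a threshold (e.g. one
    derived from a Bayesian decision rule) and whose change direction
    falls in the angular sector associated with deforestation."""
    lo, hi = angle_range
    return (magnitude > mag_threshold) & (angle >= lo) & (angle <= hi)
```

A pixel whose vegetation fraction collapses while the soil fraction rises produces a large magnitude and a characteristic angle, which is what the angular constraint exploits.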
102.
A major tsunami in December 2004 devastated the coastal ecosystems along the Andaman Sea coast of Thailand. Since intact coastal ecosystems provide many important services for local communities at the Andaman Sea, it is crucial to investigate to what extent (in terms of percentage area and speed) the affected ecosystems were capable of recovering after the tsunami. Field measurements and multi-date IKONOS imagery were used to estimate the recovery and succession patterns of coastal vegetation types in the Phang-Nga province of Thailand, three years after the tsunami. This study thus contributes to a holistic understanding of the ecological vulnerability of the coastal area to tsunamis. A zone-based change detection approach is applied by comparing two change detection techniques: the first method calculates a recovery rate based on multi-temporal TNDVI (transformed normalized difference vegetation index) images (TNDVI approach), whereas the second applies change vector analysis (CVA). Although these two methods provide different types of information (quantitative for the TNDVI approach, qualitative for the CVA), they are comparable in terms of results and accuracies. The results reveal that recovery processes vary with the type of ecosystem and, furthermore, are strongly influenced by human activities. Grasslands, coconut plantations and the mixed vegetation cover recovered faster than the mangroves and casuarina forests. Among the forest ecosystems, recovery rates of casuarina forests were higher than those of mangroves, but the recovery area was smaller. This study also discusses the potential, as well as the limitations and inaccuracies, of applying high-resolution optical imagery for assessing vegetation recovery at a local scale.
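The TNDVI used in the first approach is the standard transformed vegetation index, TNDVI = sqrt(NDVI + 0.5). A per-pixel recovery rate can then be defined from pre-tsunami, post-tsunami and later-date images. The sketch below is one plausible formulation under those assumptions, not the authors' exact definition; all names are illustrative.

```python
import numpy as np

def tndvi(nir, red):
    """Transformed NDVI: sqrt(NDVI + 0.5). The +0.5 shift keeps the
    radicand positive for most vegetated pixels; values are clipped
    at zero to be safe for strongly negative NDVI."""
    ndvi = (nir - red) / np.maximum(nir + red, 1e-12)
    return np.sqrt(np.clip(ndvi + 0.5, 0.0, None))

def recovery_rate(tndvi_pre, tndvi_post_tsunami, tndvi_later):
    """Illustrative recovery rate: fraction of the tsunami-induced
    TNDVI loss that has been regained by the later date (per pixel).
    Pixels with no measurable loss are treated as fully recovered."""
    loss = tndvi_pre - tndvi_post_tsunami
    regained = tndvi_later - tndvi_post_tsunami
    return np.where(np.abs(loss) > 1e-12, regained / loss, 1.0)
```

A rate near 1 means the pixel has returned to its pre-tsunami greenness; values near 0 indicate little regrowth three years on.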
103.
Industrial applications have often grown and improved over many years. As their performance demands increase, they also need to benefit from the availability of multi-core processors. However, reimplementing these applications from scratch, or even restructuring them, is very expensive, often due to high certification effort. A strategy for systematically parallelizing legacy code is therefore needed. We present a parallelization approach for hard real-time systems that ensures a high degree of legacy-code reuse and preserves timing analysability. To show its applicability, we apply it to the core algorithm of an avionics application as well as to the control program of a large construction machine. We create models of the legacy programs that expose the potential for parallelism, optimize them, and change the source code accordingly. The parallelized applications are placed on a predictable multi-core processor with up to 18 cores. For evaluation, we compare the worst-case execution times and their speedups. Furthermore, we analyse limitations that arise during the parallelization process.
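A minimal fork-join sketch of the idea: stages of the legacy program that a dependency analysis has found to be independent run in parallel and are joined before the next sequential stage. This is purely illustrative (the function names are invented); a hard real-time deployment would use a statically scheduled, predictable multi-core, not a dynamic thread pool.

```python
from concurrent.futures import ThreadPoolExecutor

def run_independent_stages(stages, inputs, max_workers=4):
    """Fork-join over legacy stages shown to be independent.

    stages: list of callables (the extracted legacy subroutines).
    inputs: one input object per stage.
    Results are returned in stage order, so the sequential code that
    follows the join sees the same values as in the original program.
    """
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        return list(pool.map(lambda sf: sf[0](sf[1]), zip(stages, inputs)))
```

Keeping results in stage order is what preserves the observable behaviour of the legacy code; only the execution overlaps.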
104.
105.
Top-k query processing is a fundamental building block for efficient ranking in a large number of applications. Efficiency is a central issue, especially in distributed settings where the data is spread across different nodes in a network. This paper introduces novel optimization methods for top-k aggregation queries in such distributed environments. The optimizations can be applied to all algorithms that fall into the frameworks of the prior TPUT and KLEE methods. The optimizations address three degrees of freedom: 1) hierarchically grouping input lists into top-k operator trees and optimizing the tree structure, 2) computing data-adaptive scan depths for different input sources, and 3) data-adaptive sampling of a small subset of input sources in scenarios with hundreds or thousands of query-relevant network nodes. All optimizations are based on a statistical cost model that utilizes local synopses, e.g., in the form of histograms, efficiently computed convolutions, and estimators based on order statistics. The paper presents comprehensive experiments with three different real-life datasets, using the ns-2 network simulator for a packet-level simulation of a large Internet-style network.
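For context, phase 1 of the TPUT framework that the paper builds on works roughly as follows: each node ships its local top-k (item, score) entries, the coordinator forms partial sums, and the k-th largest partial sum becomes the threshold that prunes phase-2 fetches. A minimal sketch, with illustrative names:

```python
from collections import defaultdict

def tput_phase1_threshold(node_lists, k):
    """Phase 1 of TPUT-style top-k aggregation (simplified sketch).

    node_lists: one dict {item: local_score} per network node.
    Returns (tau, per_node_cutoff): tau is the k-th largest partial
    sum over the shipped entries; in phase 2 each node would then be
    asked for every item with local score >= tau / number_of_nodes.
    """
    partial = defaultdict(float)
    for scores in node_lists:
        local_topk = sorted(scores.items(), key=lambda kv: -kv[1])[:k]
        for item, s in local_topk:
            partial[item] += s
    best = sorted(partial.values(), reverse=True)
    tau = best[k - 1] if len(best) >= k else 0.0
    return tau, tau / max(len(node_lists), 1)
```

The optimizations in the abstract refine exactly these knobs: how deep each node scans, how nodes are grouped into operator trees, and which nodes are contacted at all.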
106.
Geno-mathematical identification of the multi-layer perceptron
In this paper, we focus on the use of the three-layer backpropagation network in vector-valued time series estimation problems. The neural network provides a framework for noncomplex calculations to solve the estimation problem, yet the search for optimal or even feasible neural networks for stochastic processes is both time consuming and uncertain. The backpropagation algorithm—written in strict ANSI C—has been implemented as a standalone support library for the genetic hybrid algorithm (GHA), running on any sequential or parallel mainframe computer. In order to cope with ill-conditioned time series problems, we extended the original backpropagation algorithm to a K nearest neighbors algorithm (K-NARX), where the number K is determined genetically along with a set of key parameters. In the K-NARX algorithm, the terminal solution at instant t can be used as a starting point for the next t, which tends to stabilize the optimization process when dealing with autocorrelated time series vectors. This possibility has proved especially useful in difficult time series problems. Following the prevailing research directions, we use a genetic algorithm to determine optimal parameterizations for the network, including the lag structure for the nonlinear vector time series system, the net structure with one or two hidden layers and the corresponding number of nodes, the type of activation function (currently the standard logistic sigmoid, a bipolar transformation, the hyperbolic tangent, an exponential function and the sine function), the type of minimization algorithm, the number K of nearest neighbors in the K-NARX procedure, the initial value of the Levenberg–Marquardt damping parameter, and the value of the neural learning (stabilization) coefficient α. We have focused on a flexible structure allowing the future addition of, e.g., new minimization algorithms and activation functions. We demonstrate the power of the genetically trimmed K-NARX algorithm on a representative data set.
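The genetically selected activation-function set listed above can be represented as a simple lookup that a chromosome indexes into. The implementations below are the standard textbook forms of the five functions named in the abstract, not the authors' ANSI C code, and `decode_gene` is an invented helper showing how a GA gene might pick one.

```python
import math

# Candidate activation functions the genetic algorithm searches over;
# the set mirrors the one listed in the abstract.
ACTIVATIONS = {
    "logistic": lambda x: 1.0 / (1.0 + math.exp(-x)),       # standard sigmoid
    "bipolar":  lambda x: 2.0 / (1.0 + math.exp(-x)) - 1.0,  # range (-1, 1)
    "tanh":     math.tanh,
    "exp":      math.exp,
    "sine":     math.sin,
}

def decode_gene(gene):
    """Map an integer gene to an activation function, as a GA
    chromosome entry might (illustrative; sorted for determinism)."""
    names = sorted(ACTIVATIONS)
    return ACTIVATIONS[names[gene % len(names)]]
```

Encoding the choice as a small integer keeps it compatible with the same crossover and mutation operators used for the lag structure and node counts.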
107.
Software and Systems Modeling - Model-driven engineering (MDE) has proved to be a useful approach to cope with today’s ever-growing complexity in the development of software systems;...
108.
We consider bounds on the prediction error of classification algorithms based on sample compression. We refine the notion of a compression scheme to distinguish between compression schemes that are invariant under permutation and repetition and those that are not, leading to different prediction error bounds. We also extend known results on compression to the case of non-zero empirical risk. We provide bounds on the prediction error of classifiers returned by mistake-driven online learning algorithms by interpreting mistake bounds as bounds on the size of the algorithm's compression scheme. This leads to a bound on the prediction error of perceptron solutions that depends on the margin a support vector machine would achieve on the same training sample. Furthermore, using the property of compression, we derive bounds on the average prediction error of kernel classifiers in the PAC-Bayesian framework. These bounds assume a prior measure over the expansion coefficients in the data-dependent kernel expansion and bound the average prediction error uniformly over subsets of the space of expansion coefficients. Editor: Shai Ben-David
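For context, the classical zero-empirical-risk sample compression bound (in the Littlestone–Warmuth style) is the template that results like those above refine and extend: if a learning algorithm's hypothesis $h$ can be reconstructed from a compression set of $d$ of the $m$ training examples and $h$ makes no errors on the remaining examples, then with probability at least $1-\delta$ over the sample,

\[
R(h) \;\le\; \frac{1}{m-d}\left(\ln\binom{m}{d} + \ln\frac{1}{\delta}\right).
\]

The non-zero empirical risk and PAC-Bayesian variants discussed in the abstract generalize this template: the combinatorial term changes with the invariance properties of the scheme, and the zero-error assumption is relaxed.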
109.
This article presents the approaches taken to integrate a novel anthropomorphic robot hand into a humanoid robot. The requisites enabling such a robot hand to use everyday objects in an environment built for humans are presented. Starting from a design that resembles the human hand in the size and movability of the mechatronic system, a low-level control system is shown, providing reliable and stable controllers for single joint angles and torques, entire fingers and several coordinated fingers. The high-level control system connecting the low-level control system with the rest of the humanoid robot is then presented. It provides grasp skills to the superior robot control system, coordinates movements of hand and arm, and determines grasp patterns depending on the object to grasp and the task to execute. Finally, preliminary results of the system, currently being tested in simulation, are presented.
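The single-joint controllers at the bottom of such a low-level control stack are typically simple feedback loops. The PID sketch below is an illustrative stand-in for a joint-angle controller of this kind, not the authors' implementation; class and parameter names are invented.

```python
class JointPID:
    """Minimal PID loop for one finger-joint angle.

    kp, ki, kd: proportional, integral and derivative gains.
    dt: fixed control period in seconds.
    step() returns the commanded joint torque (or motor effort).
    """
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_err = 0.0

    def step(self, target_angle, measured_angle):
        err = target_angle - measured_angle
        self.integral += err * self.dt            # accumulate steady-state error
        deriv = (err - self.prev_err) / self.dt   # damp fast changes
        self.prev_err = err
        return self.kp * err + self.ki * self.integral + self.kd * deriv
```

Per-finger and multi-finger coordination layers would then sit on top of many such loops, issuing the joint-angle setpoints that realize a grasp pattern.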
110.