A total of 2,769 results were found (search time: 15 ms).
81.
This paper proposes a Fuzzy Fault Tolerant Predictive Control (FFTPC) method with integral action for a class of nonlinear systems. The Takagi-Sugeno (T-S) fuzzy approach is adopted as a modelling technique so that active control methods designed for linear models can be applied. The proposed control strategy combines a Parallel Distributed Compensation (PDC) control law with Model Predictive Control (MPC), where the T-S fuzzy model relies on Unmeasurable Premise Variables (UPV). A T-S fuzzy observer provides an L2-norm estimation of the system state vector and of the faults. The controller and observer gains are obtained by solving Linear Matrix Inequalities (LMIs) derived from Lyapunov theory. The validity of the proposed Fault Tolerant Control (FTC) strategy is illustrated through an application to a Diesel Engine Air Path (DEAP) system.
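The abstract does not state the paper's actual LMI conditions, so the snippet below is only a minimal sketch of the general recipe it alludes to: for a single linear (sub)model dx/dt = Ax + Bu, a stabilising state-feedback gain K can be synthesised from the classical Lyapunov LMI A P + P A' + B Y + Y' B' < 0 with K = Y P^{-1}. The matrices A and B are illustrative placeholders, not the DEAP model, and the paper's fault-tolerant and predictive terms are omitted.

```python
import numpy as np
import cvxpy as cp

# Illustrative open-loop-unstable system (NOT the paper's DEAP model).
A = np.array([[0.0, 1.0],
              [2.0, -1.0]])
B = np.array([[0.0],
              [1.0]])
n, m = B.shape

# Classical stabilisation LMI: find P = P' > 0 and Y with
#   A P + P A' + B Y + Y' B' < 0,  then K = Y P^{-1} makes A + B K Hurwitz.
P = cp.Variable((n, n), symmetric=True)
Y = cp.Variable((m, n))
S = cp.Variable((n, n), symmetric=True)  # symmetric slack holding the LMI term
eps = 1e-4
constraints = [
    S == A @ P + P @ A.T + B @ Y + Y.T @ B.T,
    S << -eps * np.eye(n),
    P >> eps * np.eye(n),
]
cp.Problem(cp.Minimize(0), constraints).solve()

K = Y.value @ np.linalg.inv(P.value)
print("state-feedback gain K:", K)
print("closed-loop eigenvalues:", np.linalg.eigvals(A + B @ K))
```

The symmetric slack variable S is only a modelling convenience: it lets the solver see an explicitly symmetric matrix in the semidefinite constraint while the equality ties it to the Lyapunov expression.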
82.
83.
The objective of this study was to determine the effect of temperature on whole milk density measured at four temperatures: 5, 10, 15, and 20 °C. A total of ninety-three individual milk samples were collected from the morning milking of thirty-two Holstein Friesian dairy cows of national average genetic merit, once every two weeks over a period of 4 weeks, and were assessed by Fourier transform infrared spectroscopy for milk composition analysis. Density of the milk was evaluated using two different analytical methods: a portable density meter, the DMA35, and a standard desktop model, the DMA4500M (Anton Paar GmbH, UK). Milk density was analysed with a linear mixed model with the fixed effects of sampling period, temperature, and analysis method; the triple interaction of sampling period × analysis method × temperature; and the random effect of cow to account for repeated measures. The effect of temperature on milk density (ρ) was also evaluated by including temperature (T) as a covariate with linear and quadratic effects within each analytical method. The fitted density–temperature curve was ρ = 1.0338 − 0.00017T − 0.0000122T² (R² = 0.64) for the DMA35 instrument and ρ = 1.0334 + 0.000057T − 0.00001T² (R² = 0.61) for the DMA4500 instrument. The mean density determined with the DMA4500 at 5 °C was 1.0334 g cm⁻³, with corresponding figures of 1.0330, 1.0320, and 1.0305 g cm⁻³ at 10, 15, and 20 °C, respectively. The milk density values obtained in this study at specific temperatures will help to address any bias in weight–volume calculations and thus may also improve financial and operational control for dairy processors in Ireland and internationally.
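As a quick sanity check of the reported regressions, the sketch below evaluates both fitted curves at the four measurement temperatures. The coefficients are copied directly from the abstract; small discrepancies against the quoted DMA4500 means (e.g. 1.03297 vs. 1.0330 at 10 °C) come from rounding in the published coefficients.

```python
# Evaluate the two density-temperature regressions reported in the abstract.
# Units: density in g/cm^3, temperature in degrees Celsius.

def density_dma35(t_c: float) -> float:
    return 1.0338 - 0.00017 * t_c - 0.0000122 * t_c ** 2

def density_dma4500(t_c: float) -> float:
    return 1.0334 + 0.000057 * t_c - 0.00001 * t_c ** 2

for t_c in (5, 10, 15, 20):
    print(f"{t_c:>2} degC  DMA35: {density_dma35(t_c):.5f}  "
          f"DMA4500: {density_dma4500(t_c):.5f}")
```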
84.
85.
Basic algorithms have been proposed in the field of low-power scheduling (Yao, F., et al., in Proceedings of the IEEE Annual Symposium on Foundations of Computer Science, pp. 374–382, 1995) that compute the minimum-energy schedule for a set of non-recurrent tasks (or jobs) scheduled under EDF on a dynamically variable-voltage processor. In this study, we propose improvements upon the existing algorithms, with lower average and worst-case complexities. They are based on a new EDF feasibility test that helps to identify the “critical intervals”. The complexity of this feasibility test depends on structural characteristics of the set of jobs; more precisely, it depends on how the tasks' intervals are nested within one another. The first step of the algorithm is to construct the Hasse diagram of the set of tasks, where the partial order is defined by the inclusion relation on the tasks' intervals. The algorithm then constructs the shortest path in a geometrical representation at each level of the Hasse diagram. The optimal processor speed is chosen according to the maximal slope of each path.
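The paper's own Hasse-diagram algorithm is not given in the abstract. As background, here is a minimal sketch of the classic Yao–Demers–Shenker (YDS) critical-interval algorithm it improves upon: repeatedly find the interval of maximum intensity (work per unit time), run it at exactly that speed, then remove its jobs and compress the timeline. This is a brute-force reference version with illustrative job data, not the improved algorithm described above.

```python
def yds_speed_profile(jobs):
    """Classic YDS minimum-energy schedule (Yao et al., 1995).

    jobs: list of (release, deadline, work) for non-recurrent jobs under EDF.
    Returns (speed, duration) pieces of the optimal speed profile.
    """
    jobs = list(jobs)
    profile = []
    while jobs:
        times = sorted({t for r, d, _ in jobs for t in (r, d)})
        best = None  # (intensity, t0, t1)
        for i, t0 in enumerate(times):
            for t1 in times[i + 1:]:
                work = sum(w for r, d, w in jobs if t0 <= r and d <= t1)
                intensity = work / (t1 - t0)
                if best is None or intensity > best[0]:
                    best = (intensity, t0, t1)
        speed, t0, t1 = best
        profile.append((speed, t1 - t0))

        # Drop the jobs served in the critical interval and compress the
        # timeline so the remaining jobs see a contiguous time axis.
        def squeeze(t):
            if t <= t0:
                return t
            if t >= t1:
                return t - (t1 - t0)
            return t0

        jobs = [(squeeze(r), squeeze(d), w)
                for r, d, w in jobs if not (t0 <= r and d <= t1)]
    return sorted(profile, reverse=True)

# Illustrative job set: (release, deadline, work).
print(yds_speed_profile([(0, 5, 4), (1, 3, 4), (4, 10, 3)]))
```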
86.
87.
EASEA is a framework designed to help non-expert programmers optimize their problems with evolutionary computation. It can generate code targeted at standard CPU architectures, GPGPU-equipped machines, and distributed-memory clusters. In this paper, EASEA is presented through its underlying algorithms and some example problems. Achievable speedups are also reported for different NVIDIA GPGPU cards across several families of optimization algorithms.
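EASEA generates code from its own specification language, which the abstract does not show. Purely as a reminder of the kind of evolutionary loop such frameworks produce, here is a minimal (μ + λ) evolution strategy on a toy sphere objective; the objective, population sizes, and mutation scale are arbitrary choices for illustration, not EASEA defaults.

```python
import random

# Minimal (mu + lambda) evolutionary loop on a toy sphere objective.
DIM, MU, LAMBDA, GENS = 10, 20, 40, 200

def fitness(x):
    return sum(v * v for v in x)  # to be minimised

def mutate(x, sigma=0.1):
    return [v + random.gauss(0.0, sigma) for v in x]

population = [[random.uniform(-5, 5) for _ in range(DIM)] for _ in range(MU)]
for gen in range(GENS):
    # Each offspring is a mutated copy of a random parent; the best MU
    # of parents + offspring survive to the next generation.
    offspring = [mutate(random.choice(population)) for _ in range(LAMBDA)]
    population = sorted(population + offspring, key=fitness)[:MU]

print("best fitness:", fitness(population[0]))
```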
88.
We study the computation of the shallow-water equations with topography by Finite Volume methods in a one-dimensional framework (though all the methods introduced extend naturally to two dimensions). All methods are based on a discretisation of the topography by a piecewise-constant function on each cell of the mesh, following an original idea of Le Roux et al. Whereas the Well-Balanced scheme of Le Roux is based on the exact resolution of each Riemann problem, we consider approximate Riemann solvers here. Several single-step methods are derived from this formalism, and the numerical results are compared to a fractional-step method. Several test cases are presented: convergence towards steady states in subcritical and supercritical configurations, the appearance of a dry area caused by a drain over a bump, and the appearance of vacuum caused by a double rarefaction wave over a step. The numerical schemes, combined with an appropriate high-order extension, provide accurate and convergent approximations.
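For reference, the one-dimensional system being discretised, written in standard notation (h the water height, u the velocity, g gravity, z(x) the topography); this is textbook material rather than a formula quoted from the paper. Well-balanced schemes are precisely those that preserve the steady state on the last line exactly at the discrete level.

```latex
\begin{aligned}
  &\partial_t h + \partial_x (h u) = 0,\\
  &\partial_t (h u) + \partial_x\!\Big(h u^2 + \frac{g h^2}{2}\Big)
    = -\, g\, h\, \partial_x z,\\[4pt]
  &\text{``lake at rest'' steady state:}\quad u = 0, \qquad h + z = \mathrm{const}.
\end{aligned}
```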
89.
90.
We present a computer tool for testing walk hypotheses for human beings. This tool aims to generate plausible walking movements according to anatomical knowledge. To this end, we introduce an interpolation method based on morphological data on the one hand, and on stance and footprint hypotheses on the other. We want to test these hypotheses for application to the reconstruction of early hominid walking. We interpolate from a specific representation of the movement: a characteristic relative displacement. First, we use a motion capture system to acquire real movements of a walk cycle and represent them with a generic parametric model, thereby creating a database of movements. Using this database, the interpolation process produces a retargeted motion adapted to the morphology of the targeted skeleton. The interpolation is driven by three main hypotheses: the first concerns the reference stance, the second the lateral spacing between the feet, and the third the length of the step. After reviewing related work, we present the two main components of our method: the 3D motion representation and the multidimensional interpolation method applied to it. The interpolation method handles morphological adaptation, while an inverse kinematics solver computes the skeleton movements. A self-coherent validation process tests the coherence of the proposed method. As a result, we show an application to a virtual skeleton of Lucy (Australopithecus afarensis A.L. 288-1) reconstructed from real data. Finally, the relevance of the method for anthropological investigations and for animation purposes is discussed, and future work is outlined with respect to the limitations of the proposed method.
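The abstract does not specify the interpolation scheme, so the following is only a hypothetical sketch of one common way to blend a motion database over a low-dimensional parameter space (stance, lateral foot spacing, step length): inverse-distance weighting of time-aligned joint-angle trajectories. All names, shapes, and the weighting scheme itself are invented for illustration and are not taken from the paper.

```python
import numpy as np

def idw_blend(examples, query, power=2.0, eps=1e-9):
    """Blend example motions by inverse-distance weighting in parameter space.

    examples: list of (params, motion) pairs, where params has shape (3,)
              for (stance, lateral foot spacing, step length) and motion has
              shape (frames, dofs) of time-aligned joint angles.
    query:    shape (3,) target parameters for the retargeted skeleton.
    Returns the blended (frames, dofs) joint-angle trajectory.
    """
    params = np.array([p for p, _ in examples], dtype=float)
    motions = np.array([m for _, m in examples], dtype=float)
    dists = np.linalg.norm(params - np.asarray(query, float), axis=1)
    if np.any(dists < eps):                 # exact hit in the database
        return motions[int(np.argmin(dists))]
    weights = 1.0 / dists ** power
    weights /= weights.sum()
    return np.tensordot(weights, motions, axes=1)

# Tiny fake database: two 4-frame, 2-DOF motions at different step lengths.
db = [(np.array([0.0, 0.2, 0.5]), np.zeros((4, 2))),
      (np.array([0.0, 0.2, 0.9]), np.ones((4, 2)))]
print(idw_blend(db, np.array([0.0, 0.2, 0.7])))  # halfway blend -> all 0.5
```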