71.
Fluoropolymers (FPs) are thermoplastic materials which exhibit a number of unique chemical and physical characteristics that no other man-made plastic combines. For example, FPs show very good resistance to almost all chemicals, excellent water-vapor barrier properties, high thermal stability, outstanding electrical insulation properties (low dielectric constant and dissipation factor), and an extremely low friction coefficient giving them high self-lubrication properties, as well as high resistance to rad…
72.
In numerous industrial CFD applications, it is usual to use two (or more) different codes to solve a physical phenomenon: where the flow is a priori assumed to have a simple behavior, a code based on a coarse model is applied, while a code based on a fine model is used elsewhere. This leads to a complex coupling problem with fixed interfaces. The aim of the present work is to provide a numerical indicator to optimize the position of these coupling interfaces. In other words, thanks to this numerical indicator, one can verify whether the use of the coarser model, and of the resulting coupling, introduces spurious effects. In order to validate this indicator, we use it in a dynamic multiscale method with moving coupling interfaces. The principle of this method is to use the coarse model instead of the fine model in as much of the computational domain as possible, while obtaining an accuracy comparable with that provided by the fine model. We focus here on general hyperbolic systems with stiff relaxation source terms, together with the corresponding hyperbolic equilibrium systems. Using a numerical Chapman–Enskog expansion and the distance to the equilibrium manifold, we construct the numerical indicator. Based on several works on the coupling of different hyperbolic models, an original numerical method of dynamic model adaptation is proposed. We prove that this multiscale method preserves invariant domains and that the entropy of the numerical solution decreases with respect to time. The reliability of the adaptation procedure is assessed on various 1D and 2D test cases coming from two-phase flow modeling.
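As an informal illustration of the distance-to-equilibrium idea (not the paper's Chapman–Enskog-based indicator), the sketch below flags, for a toy 1D relaxation system whose equilibrium manifold is v = f(u), the cells where the state is too far from equilibrium and the fine (relaxation) model should be kept. The system, the normalisation, and the threshold are illustrative assumptions.

```python
import numpy as np

def equilibrium_distance_indicator(u, v, f_eq, threshold=1e-3):
    """Per-cell indicator for a toy 1D relaxation system whose
    equilibrium manifold is v = f_eq(u).

    The indicator is the normalised distance of the state (u, v) to that
    manifold; cells where it exceeds `threshold` are flagged as needing
    the fine (relaxation) model, the others may use the coarse
    (equilibrium) model.
    """
    dist = np.abs(v - f_eq(u))
    scale = 1.0 + np.abs(v)            # avoids division by zero
    indicator = dist / scale
    use_fine_model = indicator > threshold
    return indicator, use_fine_model

if __name__ == "__main__":
    x = np.linspace(0.0, 1.0, 200)
    u = np.where(x < 0.5, 1.0, 0.2)          # discontinuous state
    f_eq = lambda w: 0.5 * w**2              # Burgers-type equilibrium flux
    # off-equilibrium perturbation concentrated near the jump
    v = f_eq(u) + 0.05 * np.exp(-((x - 0.5) / 0.02) ** 2)
    ind, fine = equilibrium_distance_indicator(u, v, f_eq, threshold=1e-2)
    print(f"fine-model cells: {fine.sum()} / {fine.size}")
```

In a dynamic adaptation loop, the boolean mask would be re-evaluated at every time step so that the coupling interfaces can move with the solution.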
73.
In this paper, we propose a metaheuristic for solving an original scheduling problem with auxiliary resources in a photolithography workshop of a semiconductor plant. The photolithography workshop is often a bottleneck, and improving scheduling decisions in this workshop can help to improve performance indicators of the whole plant. Two optimization criteria are considered separately: the weighted flow time (to minimize) and the number of products that are processed (to maximize). After stating the problem and giving some properties of the solution space, we show how these properties help us to efficiently solve the problem with the proposed memetic algorithm, which has been implemented and tested on large generated instances. Numerical experiments show that good solutions are obtained within a reasonable computational time.
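The abstract does not detail the memetic algorithm itself; the sketch below is only a generic memetic skeleton (order crossover, swap mutation, and an adjacent-swap local search) applied to a heavily simplified single-machine version of the problem, minimising the weighted flow time of a job permutation. The job data, operators, and parameters are assumptions, and the auxiliary photolithography resources are not modelled.

```python
import random

def weighted_flow_time(perm, proc, weights):
    """Weighted flow time of a job permutation on a single machine."""
    t, total = 0.0, 0.0
    for j in perm:
        t += proc[j]
        total += weights[j] * t
    return total

def local_search(perm, proc, weights):
    """First-improvement local search over adjacent swaps."""
    best = weighted_flow_time(perm, proc, weights)
    improved = True
    while improved:
        improved = False
        for i in range(len(perm) - 1):
            perm[i], perm[i + 1] = perm[i + 1], perm[i]
            cost = weighted_flow_time(perm, proc, weights)
            if cost < best:
                best, improved = cost, True
            else:
                perm[i], perm[i + 1] = perm[i + 1], perm[i]
    return perm, best

def crossover(p1, p2):
    """Order crossover (OX) for permutations."""
    n = len(p1)
    a, b = sorted(random.sample(range(n), 2))
    child = [None] * n
    child[a:b] = p1[a:b]
    rest = [j for j in p2 if j not in child[a:b]]
    k = 0
    for i in range(n):
        if child[i] is None:
            child[i] = rest[k]
            k += 1
    return child

def memetic(proc, weights, pop_size=20, generations=50, seed=0):
    random.seed(seed)
    n = len(proc)
    pop = [local_search(random.sample(range(n), n), proc, weights)
           for _ in range(pop_size)]
    for _ in range(generations):
        (p1, _), (p2, _) = random.sample(pop, 2)
        child = crossover(list(p1), list(p2))
        i, j = random.sample(range(n), 2)          # mutation: swap two jobs
        child[i], child[j] = child[j], child[i]
        child, cost = local_search(child, proc, weights)
        worst = max(range(pop_size), key=lambda k: pop[k][1])
        if cost < pop[worst][1]:
            pop[worst] = (child, cost)             # steady-state replacement
    return min(pop, key=lambda e: e[1])

if __name__ == "__main__":
    proc = [5, 3, 8, 2, 7, 4]
    weights = [1, 4, 2, 5, 1, 3]
    perm, cost = memetic(proc, weights)
    print("best order:", perm, "weighted flow time:", cost)
```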
74.
Some supervised tasks are presented with a numerical output, but decisions have to be made in a discrete, binarised way, according to a particular cutoff. This binarised regression task is a very common situation that requires its own analysis, different from regression, classification, and ordinal regression. We first investigate the application cases in terms of the information about the distribution and range of the cutoffs, and distinguish six possible scenarios, some of which are more common than others. Next, we study two basic approaches: the retraining approach, which discretises the training set whenever the cutoff is available and learns a new classifier from it, and the reframing approach, which learns a regression model and sets the cutoff when this becomes available during deployment. In order to assess the binarised regression task, we introduce context plots featuring error against cutoff. Two special cases are of interest, the UCE and OCE curves: the area under the former is the mean absolute error, while the latter yields a new metric that lies between a ranking measure and a residual-based measure. A comprehensive evaluation of the retraining and reframing approaches is performed using a repository of binarised regression problems created on purpose, concluding that neither method is clearly better than the other, except when the size of the training data is small.
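A minimal numpy sketch of the two basic approaches on synthetic data follows: retraining refits a thresholded model for each cutoff, while reframing fits a single regression model and applies the cutoff at deployment. The data, the linear models, and the chosen cutoffs are illustrative assumptions; the UCE/OCE context plots themselves are not reproduced.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic task: numerical target y, but decisions are binary (y > cutoff?).
X = rng.normal(size=(300, 3))
y = X @ np.array([1.5, -2.0, 0.5]) + rng.normal(scale=0.5, size=300)
X_train, y_train = X[:200], y[:200]
X_test, y_test = X[200:], y[200:]

def fit_linear(X, t):
    """Least-squares fit with an intercept column."""
    A = np.c_[X, np.ones(len(X))]
    coef, *_ = np.linalg.lstsq(A, t, rcond=None)
    return coef

def predict_linear(coef, X):
    return np.c_[X, np.ones(len(X))] @ coef

reg_coef = fit_linear(X_train, y_train)          # one regression model, fitted once

def retraining(cutoff):
    """Retraining: binarise the training targets at the cutoff and fit a
    new (here linear, thresholded at 0.5) classifier for that cutoff."""
    coef = fit_linear(X_train, (y_train > cutoff).astype(float))
    return predict_linear(coef, X_test) > 0.5

def reframing(cutoff):
    """Reframing: keep the regression model and apply the cutoff to its
    numerical predictions at deployment time."""
    return predict_linear(reg_coef, X_test) > cutoff

for cutoff in (-1.0, 0.0, 1.0):                  # cutoff only known at deployment
    true = y_test > cutoff
    err_retrain = np.mean(retraining(cutoff) != true)
    err_reframe = np.mean(reframing(cutoff) != true)
    print(f"cutoff={cutoff:+.1f}  retraining error={err_retrain:.3f}  "
          f"reframing error={err_reframe:.3f}")
```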
75.
The efficient application of current shadow detection methods in video is hindered by the difficulty of defining their parameters or models and by their dependence on the application domain. This paper presents a new shadow detection and removal method that aims to overcome these limitations. It proposes a semi-supervised learning rule using a new variant of the co-training technique for shadow detection and removal in uncontrolled scenes. The new variant both reduces run-time, through a periodic execution of the co-training process according to a novel temporal framework, and generates a more generic prediction model for accurate classification. The efficiency of the proposed method is shown experimentally on a testbed of videos recorded by a static camera and subject to several constraints, e.g., dynamic changes in the natural scene and various visual shadow features. The experimental study produced quantitative and qualitative results that highlight the robustness of our shadow detection method and its accuracy in removing cast shadows. In addition, the practical usefulness of the proposed method was evaluated by integrating it into a highway control and management software system called RoadGuard.
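The sketch below shows a generic two-view co-training loop (using scikit-learn logistic regression on synthetic features), which is only the basic mechanism that the paper builds on; the temporal framework, the periodic execution, and the shadow-specific features of the proposed variant are not reproduced, and all feature views, thresholds, and iteration counts are assumptions.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def co_training(X1, X2, y, labeled_mask, rounds=5, per_round=10):
    """Generic two-view co-training.

    X1, X2       : two feature views of the same samples
    y            : labels (read only where labeled_mask is True)
    labeled_mask : boolean array marking the initially labelled samples
    Each round, every classifier labels its most confident unlabelled
    samples, which are added to the labelled pool of both views.
    """
    labeled = labeled_mask.copy()
    pseudo = y.copy()
    clf1 = LogisticRegression(max_iter=1000)
    clf2 = LogisticRegression(max_iter=1000)
    for _ in range(rounds):
        clf1.fit(X1[labeled], pseudo[labeled])
        clf2.fit(X2[labeled], pseudo[labeled])
        unlabeled = np.where(~labeled)[0]
        if len(unlabeled) == 0:
            break
        for clf, X in ((clf1, X1), (clf2, X2)):
            proba = clf.predict_proba(X[unlabeled])
            conf = proba.max(axis=1)
            picked = unlabeled[np.argsort(conf)[-per_round:]]
            pseudo[picked] = clf.predict(X[picked])   # pseudo-labels
            labeled[picked] = True
            unlabeled = np.where(~labeled)[0]
            if len(unlabeled) == 0:
                break
    return clf1, clf2

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    n = 400
    y = rng.integers(0, 2, n)                        # 0 = background, 1 = shadow (toy labels)
    X1 = rng.normal(size=(n, 4)) + y[:, None]        # "chromatic" view (synthetic)
    X2 = rng.normal(size=(n, 3)) + 0.8 * y[:, None]  # "texture" view (synthetic)
    labeled = np.zeros(n, dtype=bool)
    labeled[:40] = True                              # small labelled seed set
    clf1, clf2 = co_training(X1, X2, y, labeled)
    agree = (clf1.predict(X1) == clf2.predict(X2)).mean()
    print(f"classifier agreement on all samples: {agree:.2f}")
```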
76.
Increasing numbers of hard environmental constraints are being imposed on urban traffic networks by authorities in an attempt to mitigate pollution caused by traffic. However, it is not trivial for authorities to assess the cost of imposing such hard environmental constraints. This leads to difficulties both when setting the constraining values and when implementing effective control measures. For that reason, quantifying the cost of imposing hard environmental constraints on a given network becomes crucial. This paper first shows that, for a given network, this cost is related not only to the environmental constraints themselves but also to the control measures considered. Next, we present an assessment criterion that quantifies the loss of optimality, under the considered control measures, caused by introducing the environmental constraints. The criterion can be obtained by solving a bi-level programming problem with and without environmental constraints. A simple case study shows its practicability as well as the differences between this framework and other frameworks that integrate environmental aspects. The proposed framework is widely applicable for assessing the interaction between traffic and its environmental impact.
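As a toy illustration of quantifying the cost of a hard environmental constraint (not the paper's bi-level formulation), the sketch below takes a two-route network where the lower level is a user equilibrium under a toll and the upper level picks the toll by grid search, with and without an emission cap; all link, emission, and demand parameters are assumed values.

```python
import numpy as np

# Toy two-route network: demand d splits between route 1 and route 2.
# Link travel times are linear: t_i(x) = a_i + b_i * x (minutes).  Assumed values.
a1, b1 = 10.0, 0.05     # route 1: short but congestion-prone
a2, b2 = 15.0, 0.02     # route 2: longer bypass
e1, e2 = 1.0, 0.4       # emission factors per vehicle (arbitrary units)
d = 400.0               # total demand (veh/h)

def equilibrium_flow(toll):
    """Lower level: user equilibrium with a toll on route 1, i.e. flows such
    that perceived costs t1(x1)+toll and t2(d-x1) are equal (clipped when
    one route is unused)."""
    x1 = (a2 + b2 * d - a1 - toll) / (b1 + b2)
    return float(np.clip(x1, 0.0, d))

def total_travel_time(x1):
    x2 = d - x1
    return x1 * (a1 + b1 * x1) + x2 * (a2 + b2 * x2)

def emissions(x1):
    return e1 * x1 + e2 * (d - x1)

def best_control(emission_cap=None, tolls=np.linspace(0.0, 20.0, 401)):
    """Upper level: pick the toll minimising total travel time, optionally
    subject to a hard emission cap (solved here by simple grid search)."""
    best = None
    for toll in tolls:
        x1 = equilibrium_flow(toll)
        if emission_cap is not None and emissions(x1) > emission_cap:
            continue
        cost = total_travel_time(x1)
        if best is None or cost < best[1]:
            best = (toll, cost)
    return best

unconstrained = best_control()
constrained = best_control(emission_cap=230.0)   # assumed cap
print("unconstrained optimum (toll, total time):", unconstrained)
print("with emission cap     (toll, total time):", constrained)
print("cost of the constraint:", constrained[1] - unconstrained[1])
```

The difference between the two objective values is exactly the "loss of optimality" caused by the constraint, for this particular control measure.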
77.
In this paper we introduce a new variant of shape differentiation which is adapted to the deformation of shapes along their normal direction. This is typically the case in the level-set method for shape optimization, where the shape evolves with a normal velocity. As with all other variants of the original Hadamard method of shape differentiation, our approach yields the same first-order derivative. However, the Hessian or second-order derivative is different and somewhat simpler, since only normal movements are allowed. The applications of this new Hessian formula are twofold. First, it leads to a novel extension method for the normal velocity used in the Hamilton-Jacobi equation of front propagation. Second, as could be expected, it is the basis of a Newton optimization algorithm which is conceptually simpler, since no tangential displacements have to be considered. Numerical examples are given to illustrate the potential of these two applications. The key technical tool for our approach is the method of bicharacteristics for solving Hamilton-Jacobi equations. Our new idea is to differentiate the shape along these bicharacteristics (a system of two ordinary differential equations).
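For readers unfamiliar with the level-set setting the paper relies on, the sketch below advances a front with a constant normal velocity by solving φ_t + V|∇φ| = 0 with a standard first-order upwind (Godunov-type) scheme; it does not implement the paper's Hessian-based velocity extension or the Newton algorithm, and the grid, time step, and test shape are assumptions.

```python
import numpy as np

def propagate_normal(phi, speed, dx, dt, steps):
    """Advance a level-set function under phi_t + V |grad phi| = 0 with a
    constant normal speed V > 0, using first-order upwind differences and
    periodic boundaries for simplicity."""
    for _ in range(steps):
        dxm = (phi - np.roll(phi, 1, axis=0)) / dx   # backward difference in x
        dxp = (np.roll(phi, -1, axis=0) - phi) / dx  # forward difference in x
        dym = (phi - np.roll(phi, 1, axis=1)) / dx   # backward difference in y
        dyp = (np.roll(phi, -1, axis=1) - phi) / dx  # forward difference in y
        grad_plus = np.sqrt(np.maximum(dxm, 0.0) ** 2 + np.minimum(dxp, 0.0) ** 2 +
                            np.maximum(dym, 0.0) ** 2 + np.minimum(dyp, 0.0) ** 2)
        phi = phi - dt * speed * grad_plus
    return phi

if __name__ == "__main__":
    n, dx = 128, 1.0 / 128
    x = np.linspace(0.0, 1.0, n, endpoint=False)
    X, Y = np.meshgrid(x, x, indexing="ij")
    phi0 = np.sqrt((X - 0.5) ** 2 + (Y - 0.5) ** 2) - 0.2  # signed distance to a circle
    steps, dt, speed = 50, 0.4 * dx, 1.0
    phi = propagate_normal(phi0, speed, dx, dt, steps)
    # the circular zero level set should have expanded by roughly speed * steps * dt
    area = (phi < 0).sum() * dx * dx
    print("expected radius :", 0.2 + speed * steps * dt)
    print("radius from area:", float(np.sqrt(area / np.pi)))
```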
78.
79.
A quartz crystal viscometer has been developed for measuring the viscosity of liquids under pressure. It employs an AT-cut quartz crystal resonator with a fundamental frequency of 3 MHz, inserted in a variable-volume vessel designed for working at up to 80 MPa. Viscosity is determined by two methods, from resonance frequency and bandwidth measurements at up to eight different overtones. The resonance frequency allows an absolute measurement of the viscosity but limits the accuracy to 5%, whereas the bandwidth technique, which works in a relative way, provides an accuracy of 2%. The techniques were tested by carrying out measurements on two pure compounds, heptane and toluene. The measurement results demonstrate the feasibility of the technique in this viscosity range. The apparatus was also used to determine the viscosity of n-decane with dissolved methane. The results obtained with these mixtures show the applicability of the apparatus to the study of reservoir fluids.
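The abstract does not give the working equations, so the sketch below only illustrates how a thickness-shear quartz resonator senses viscosity, using the textbook Kanazawa–Gordon relation Δf = −f₀^{3/2}·√(ρη/(π ρ_q μ_q)) for a Newtonian liquid; assuming this is the instrument's actual data-reduction model is a guess, the bandwidth-based (relative) method is not reproduced, and the heptane properties are approximate.

```python
import math

# AT-cut quartz constants (textbook values; treated here as assumptions)
RHO_Q = 2648.0       # quartz density, kg/m^3
MU_Q = 2.947e10      # quartz shear modulus, Pa

def kanazawa_frequency_shift(f0, rho_liq, eta):
    """Frequency shift (Hz) of a thickness-shear resonator immersed in a
    Newtonian liquid (Kanazawa-Gordon relation)."""
    return -f0 ** 1.5 * math.sqrt(rho_liq * eta / (math.pi * RHO_Q * MU_Q))

def viscosity_from_shift(f0, rho_liq, delta_f):
    """Invert the Kanazawa-Gordon relation to estimate viscosity (Pa*s)."""
    return math.pi * RHO_Q * MU_Q * delta_f ** 2 / (f0 ** 3 * rho_liq)

if __name__ == "__main__":
    f0 = 3.0e6                      # 3 MHz fundamental, as in the abstract
    rho, eta = 684.0, 0.40e-3       # heptane near ambient conditions (approximate)
    df = kanazawa_frequency_shift(f0, rho, eta)
    print(f"predicted shift for heptane: {df:.1f} Hz")
    print(f"viscosity recovered from that shift: "
          f"{viscosity_from_shift(f0, rho, df) * 1e3:.3f} mPa*s")
```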
80.
In the density classification problem, a binary cellular automaton (CA) should decide whether an initial configuration contains more 0s or more 1s. The answer is given when all cells of the CA agree on a given state. This problem is known to have no exact solution in the case of binary deterministic one-dimensional CA. We investigate how randomness in CA may help us solve the problem. We analyse the behaviour of stochastic CA rules that perform the density classification task. We show that describing stochastic rules as a "blend" of deterministic rules allows us to derive quantitative results on the classification quality and the classification time of previously studied rules. We introduce a new rule whose effect is to spread defects and to wash them out. This stochastic rule solves the problem with arbitrary precision, that is, its quality of classification can be made arbitrarily high, though at the price of an increased convergence time. We experimentally demonstrate that this rule exhibits good scaling properties and that it attains classification qualities never reached before.
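One well-known way to realise such a "blend" is to apply, at every cell and time step, the traffic rule (ECA 184) with probability 1−η and the majority rule (ECA 232) with probability η; whether this particular blend coincides with the rule introduced in the paper is an assumption. The sketch below implements it and estimates its classification quality empirically (the precision/convergence-time trade-off controlled by η is only illustrated, not analysed).

```python
import numpy as np

def eca_step(state, rule):
    """One synchronous step of an elementary cellular automaton with
    periodic boundaries; `rule` is the usual Wolfram rule number."""
    left, right = np.roll(state, 1), np.roll(state, -1)
    idx = 4 * left + 2 * state + right
    table = np.array([(rule >> i) & 1 for i in range(8)], dtype=np.int8)
    return table[idx]

def stochastic_density_classifier(state, eta=0.1, max_steps=5000, rng=None):
    """Stochastic blend of the traffic rule (ECA 184) and the majority rule
    (ECA 232): each cell independently applies rule 232 with probability
    eta and rule 184 otherwise.  Returns the consensus state (0 or 1), or
    None if no consensus is reached within max_steps."""
    rng = rng or np.random.default_rng()
    for _ in range(max_steps):
        traffic = eca_step(state, 184)
        majority = eca_step(state, 232)
        mask = rng.random(state.size) < eta
        state = np.where(mask, majority, traffic)
        if state.min() == state.max():
            return int(state[0])
    return None

if __name__ == "__main__":
    rng = np.random.default_rng(42)
    n, trials, correct = 149, 50, 0          # odd ring size avoids ties
    for _ in range(trials):
        density = rng.uniform(0.3, 0.7)
        init = (rng.random(n) < density).astype(np.int8)
        answer = stochastic_density_classifier(init.copy(), eta=0.1, rng=rng)
        if answer is not None and answer == int(init.sum() > n / 2):
            correct += 1
    print(f"correct classifications: {correct}/{trials}")
```

Lowering η tends to raise the classification quality at the price of a longer convergence time, which is the trade-off the abstract describes.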