41.
A new crystalline structure of poly(4-methylpentene-1) (P4MP1), named modification V, is obtained from cyclopentane solutions and gels for polymer volume fractions between 0.01 and 0.10. The effect of the thermal history imparted to the solution is analyzed. The relation between gelation, polymorphism, and the existence of helical conformations of P4MP1 in solution is discussed. Modification V is tentatively indexed on the basis of a hexagonal unit cell with dimensions a = 22.17 ± 0.14 Å and c = 6.69 ± 0.02 Å. The crystal transforms into modification I at 130 ± 5 °C, the heat of transition being +15 ± 2 J·g⁻¹.
42.
Information diffusion in large-scale networks has been studied to identify influential users. Influence has been targeted as a key feature either for reaching large populations or for shaping public opinion. Through micro-blogs such as Twitter, global influencers have been identified and ranked based on message propagation (retweets). In this paper, a new application is presented that first finds and then classifies local influence on Twitter: who has influenced you and who has been influenced by you. Until now, the social structures of the original authors of tweets that have been retweeted or marked as favourites have been unobservable. With this application, these structures can be discovered, and they reveal the existence of communities formed by users of similar profiles (connected among themselves) that are interrelated with other communities of similar-profile users.
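A minimal sketch of the local-influence idea, assuming the retweet log is available as (retweeter, original_author) pairs; the function name and the sample data are hypothetical illustrations, not the paper's implementation:

```python
from collections import Counter

def local_influence(retweets, user):
    """retweets: iterable of (retweeter, original_author) pairs.
    Returns (who influenced `user`, who `user` influenced),
    each ranked by retweet count."""
    influencers = Counter(a for r, a in retweets if r == user)
    influencees = Counter(r for r, a in retweets if a == user)
    return influencers.most_common(), influencees.most_common()

# hypothetical retweet log
log = [("alice", "bob"), ("alice", "bob"), ("carol", "alice"), ("alice", "dave")]
print(local_influence(log, "alice"))
# → ([('bob', 2), ('dave', 1)], [('carol', 1)])
```

In a real setting the log would come from the Twitter API (retweet and favourite events); the counting logic above is the only part sketched here.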
43.
Deactivation of metal catalysts in liquid phase organic reactions
The paper gives a general survey of the factors contributing to the deactivation of metal catalysts employed in liquid phase reactions for the synthesis of fine or intermediate chemicals. The main causes of catalyst deactivation are particle sintering, metal and support leaching, deposition of inactive metal layers or polymeric species, and poisoning by strongly adsorbed species. Weakly adsorbed species, poisons at low surface coverage, and solvents may act as selectivity promoters or modifiers. Three examples of long-term stability studies carried out in trickle-bed reactors (glucose-to-sorbitol hydrogenation on Ru/C catalysts, hydroxypropanal-to-1,3-propanediol hydrogenation on Ru/TiO2 catalysts, and wet air oxidation of paper pulp effluents on Ru/TiO2) are discussed.
44.
In numerous industrial CFD applications, it is usual to use two (or more) different codes to solve a physical phenomenon: where the flow is a priori assumed to have simple behavior, a code based on a coarse model is applied, while a code based on a fine model is used elsewhere. This leads to a complex coupling problem with fixed interfaces. The aim of the present work is to provide a numerical indicator to optimize the position of these coupling interfaces. In other words, thanks to this numerical indicator, one can verify whether the use of the coarser model and of the resulting coupling introduces spurious effects. In order to validate this indicator, we use it in a dynamical multiscale method with moving coupling interfaces. The principle of this method is to use the coarse model instead of the fine model in as much of the computational domain as possible, while obtaining an accuracy comparable with that provided by the fine model. We focus here on general hyperbolic systems with stiff relaxation source terms together with the corresponding hyperbolic equilibrium systems. Using a numerical Chapman–Enskog expansion and the distance to the equilibrium manifold, we construct the numerical indicator. Based on several works on the coupling of different hyperbolic models, an original numerical method of dynamic model adaptation is proposed. We prove that this multiscale method preserves invariant domains and that the entropy of the numerical solution decreases with respect to time. The reliability of the adaptation procedure is assessed on various 1D and 2D test cases coming from two-phase flow modeling.
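In the notation commonly used for such relaxation systems (reconstructed here as an assumption, since the abstract gives no formulas), the fine model and its hyperbolic equilibrium counterpart can be written as:

```latex
% Fine model: hyperbolic system with stiff relaxation, small parameter \varepsilon
\partial_t W + \partial_x F(W) = \frac{1}{\varepsilon}\, R(W),
\qquad \text{equilibrium manifold } \{\, R(W) = 0 \,\}.
% Coarse model: hyperbolic equilibrium system for the reduced variable w
\partial_t w + \partial_x f(w) = 0.
% Indicator sketch: distance of the numerical solution to the equilibrium
% manifold, estimated via a numerical Chapman--Enskog expansion
\mathcal{I}(W) = \bigl\| W - \mathcal{M}(w) \bigr\|,
```

where \(\mathcal{M}(w)\) denotes the equilibrium closure associated with \(w\); where \(\mathcal{I}\) is small, the coarse model suffices and the coupling interface can be moved accordingly.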
45.
In this paper, we propose a metaheuristic for solving an original scheduling problem with auxiliary resources in the photolithography workshop of a semiconductor plant. The photolithography workshop is often a bottleneck, and improving scheduling decisions in this workshop can help to improve indicators of the whole plant. Two optimization criteria are considered separately: the weighted flow time (to be minimized) and the number of products processed (to be maximized). After stating the problem and giving some properties of the solution space, we show how these properties help us solve the problem efficiently with the proposed memetic algorithm, which has been implemented and tested on large generated instances. Numerical experiments show that good solutions are obtained within a reasonable computational time.
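The weighted flow time criterion itself is easy to state; here is a small sketch with hypothetical lot data (the weights, release times, and completion times are made up, and this is not the paper's evaluation code):

```python
def weighted_flow_time(schedule):
    """schedule: list of (weight, release_time, completion_time) per lot.
    Weighted flow time = sum of weight * (completion - release)."""
    return sum(w * (c - r) for w, r, c in schedule)

# hypothetical lots on one photolithography tool
lots = [(2, 0, 5), (1, 1, 9), (3, 2, 4)]
print(weighted_flow_time(lots))  # → 2*5 + 1*8 + 3*2 = 24
```

A metaheuristic such as the proposed memetic algorithm would search over schedules (and auxiliary-resource assignments) to minimize this quantity.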
46.
Some supervised tasks present a numerical output, but decisions have to be made in a discrete, binarised way, according to a particular cutoff. This binarised regression task is a very common situation that requires its own analysis, different from regression, classification, and ordinal regression. We first investigate the application cases in terms of the information about the distribution and range of the cutoffs, and distinguish six possible scenarios, some of which are more common than others. Next, we study two basic approaches: the retraining approach, which discretises the training set whenever the cutoff is available and learns a new classifier from it, and the reframing approach, which learns a regression model and sets the cutoff when it becomes available during deployment. In order to assess the binarised regression task, we introduce context plots featuring error against cutoff. Two special cases are of interest, the UCE and OCE curves: the area under the former is the mean absolute error, while the latter yields a new metric that lies between a ranking measure and a residual-based measure. A comprehensive evaluation of the retraining and reframing approaches is performed using a purpose-built repository of binarised regression problems, concluding that neither method is clearly better than the other, except when the size of the training data is small.
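The claim that the area under an error-vs-cutoff curve recovers the mean absolute error can be checked numerically; the grid integration below is an illustrative sketch with made-up data, not the paper's definition of the UCE curve:

```python
def binarised_error(y_true, y_pred, c):
    """Fraction of examples whose binarised prediction (y_pred > c)
    disagrees with the binarised ground truth (y_true > c)."""
    return sum((yt > c) != (yp > c)
               for yt, yp in zip(y_true, y_pred)) / len(y_true)

# hypothetical data; the cutoff range [0, 1] covers all outputs
y_true = [0.2, 0.5, 0.9]
y_pred = [0.4, 0.5, 0.6]

# midpoint-rule integration of the error-vs-cutoff curve
lo, hi, n = 0.0, 1.0, 10000
h = (hi - lo) / n
area = sum(binarised_error(y_true, y_pred, lo + (i + 0.5) * h)
           for i in range(n)) * h

mae = sum(abs(t - p) for t, p in zip(y_true, y_pred)) / len(y_true)
print(area, mae)  # both close to 0.1667
```

Intuitively, a single example contributes a disagreement exactly for cutoffs lying between its true value and its prediction, an interval of length equal to its absolute error.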
47.
Increasing numbers of hard environmental constraints are being imposed on urban traffic networks by authorities in an attempt to mitigate pollution caused by traffic. However, it is not trivial for authorities to assess the cost of imposing such hard environmental constraints. This leads to difficulties when setting the constraining values as well as when implementing effective control measures. For that reason, quantifying the cost of imposing hard environmental constraints for a given network becomes crucial. This paper first shows that, for a given network, this cost is related not only to the environmental constraints themselves but also to the control measures considered. Next, we present an assessment criterion that quantifies the loss of optimality, under the control measures considered, caused by introducing the environmental constraints. The criterion can be obtained by solving a bi-level programming problem with and without environmental constraints. A simple case study shows its practicability as well as the differences between this framework and other frameworks integrating environmental aspects. The proposed framework is widely applicable when assessing the interaction of traffic and its environmental aspects.
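Schematically (in generic notation assumed here, not taken from the paper), the assessment compares the optima of a bi-level program solved with and without the environmental constraint:

```latex
% Upper level: the authority chooses control measures u
\min_{u} \; F\bigl(u, v(u)\bigr)
\quad \text{s.t.} \quad E\bigl(u, v(u)\bigr) \le E_{\max},
% Lower level: travellers' response to u (e.g. a user-equilibrium assignment)
v(u) \in \arg\min_{v} \; G(u, v).
% Cost of the hard environmental constraint under the considered controls:
\Delta = F^{*}_{\text{constrained}} - F^{*}_{\text{unconstrained}}.
```

Here \(F\) is the traffic performance objective, \(E\) the environmental quantity being capped, and \(\Delta\) the quantified loss of optimality.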
48.
In this paper we introduce a new variant of shape differentiation that is adapted to the deformation of shapes along their normal direction. This is typically the case in the level-set method for shape optimization, where the shape evolves with a normal velocity. Like all other variants of the original Hadamard method of shape differentiation, our approach yields the same first-order derivative. However, the Hessian or second-order derivative is different and somewhat simpler, since only normal movements are allowed. The applications of this new Hessian formula are twofold. First, it leads to a novel extension method for the normal velocity used in the Hamilton-Jacobi equation of front propagation. Second, as could be expected, it forms the basis of a Newton optimization algorithm which is conceptually simpler, since no tangential displacements have to be considered. Numerical examples are given to illustrate the potential of these two applications. The key technical tool for our approach is the method of bicharacteristics for solving Hamilton-Jacobi equations. Our new idea is to differentiate the shape along these bicharacteristics (a system of two ordinary differential equations).
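For a Hamilton-Jacobi equation \(\partial_t \varphi + H(x, \nabla\varphi) = 0\), the bicharacteristics mentioned above are the standard pair of ordinary differential equations (written here in generic notation, since the abstract gives none):

```latex
\dot{x}(s) = \nabla_p H\bigl(x(s), p(s)\bigr), \qquad
\dot{p}(s) = -\nabla_x H\bigl(x(s), p(s)\bigr),
```

where \(p = \nabla\varphi(x)\) along the curve; differentiating the shape along these trajectories is what replaces the usual tangential bookkeeping.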
49.
50.
In the density classification problem, a binary cellular automaton (CA) should decide whether an initial configuration contains more 0s or more 1s. The answer is given when all cells of the CA agree on a given state. This problem is known to have no exact solution in the case of binary deterministic one-dimensional CA. We investigate how randomness in CA may help us solve the problem. We analyse the behaviour of stochastic CA rules that perform the density classification task. We show that describing stochastic rules as a "blend" of deterministic rules allows us to derive quantitative results on the classification quality and the classification time of previously studied rules. We introduce a new rule whose effect is to spread defects and to wash them out. This stochastic rule solves the problem with arbitrary precision, that is, its quality of classification can be made arbitrarily high, though at the price of an increase in convergence time. We experimentally demonstrate that this rule exhibits good scaling properties and that it attains qualities of classification never reached before.
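As a toy illustration of how randomness helps classify density, here is a much simpler stochastic rule than the defect-spreading rule of the paper: each cell synchronously copies a uniformly random neighbour (a voter-model dynamic). Since the density of 1s is a martingale under this rule, the probability of reaching the all-1s consensus equals the initial density, so a biased configuration is classified correctly more often than not. All names and parameters below are illustrative:

```python
import random

def classify_density(config, rng, max_steps=50000):
    """Stochastic CA sketch: at each synchronous step, every cell copies the
    current state of a uniformly random neighbour (left or right, on a
    periodic ring).  Returns the consensus state (0 or 1), or None if no
    consensus is reached within max_steps."""
    cells = list(config)
    n = len(cells)
    for _ in range(max_steps):
        total = sum(cells)
        if total == 0 or total == n:
            return cells[0]
        cells = [cells[(i + rng.choice((-1, 1))) % n] for i in range(n)]
    return None

rng = random.Random(0)
n, ones = 15, 12                      # initial density of 1s: 0.8
base = [1] * ones + [0] * (n - ones)
trials = 200
wins = 0
for _ in range(trials):
    rng.shuffle(base)
    wins += classify_density(base, rng) == 1
print(wins / trials)                  # typically close to the initial density 0.8
```

The paper's point is precisely that better-designed stochastic rules push this success probability arbitrarily close to 1, at the price of longer convergence times.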