61.
In numerous industrial CFD applications, it is common to use two (or more) different codes to simulate a physical phenomenon: where the flow is assumed a priori to have simple behavior, a code based on a coarse model is applied, while a code based on a fine model is used elsewhere. This leads to a complex coupling problem with fixed interfaces. The aim of the present work is to provide a numerical indicator to optimize the position of these coupling interfaces. In other words, thanks to this numerical indicator, one can verify that the use of the coarser model, and of the resulting coupling, does not introduce spurious effects. In order to validate this indicator, we use it in a dynamical multiscale method with moving coupling interfaces. The principle of this method is to use the coarse model instead of the fine model over as much of the computational domain as possible, while obtaining an accuracy comparable with that provided by the fine model alone. We focus here on general hyperbolic systems with stiff relaxation source terms, together with the corresponding hyperbolic equilibrium systems. Using a numerical Chapman–Enskog expansion and the distance to the equilibrium manifold, we construct the numerical indicator. Building on several works on the coupling of different hyperbolic models, an original numerical method of dynamic model adaptation is proposed. We prove that this multiscale method preserves invariant domains and that the entropy of the numerical solution decreases with respect to time. The reliability of the adaptation procedure is assessed on various 1D and 2D test cases coming from two-phase flow modeling.
62.
Inspired by the relational algebra of data processing, this paper addresses the foundations of data analytical processing from a linear algebra perspective. The paper investigates, in particular, how aggregation operations such as cross tabulations and data cubes, essential to quantitative analysis of data, can be expressed solely in terms of matrix multiplication, transposition and the Khatri–Rao variant of the Kronecker product. The approach offers a basis for deriving an algebraic theory of data consolidation, handling the quantitative as well as qualitative sides of data science in a natural, elegant and typed way. It also shows potential for parallel analytical processing, as the parallelization theory of such matrix operations is well established.
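The abstract's central idea can be illustrated with a small sketch. The data, variable names, and encoding below are illustrative assumptions, not taken from the paper: each attribute becomes an indicator (projection) matrix, a cross tabulation is then nothing more than matrix multiplication and transposition, and the Khatri–Rao (column-wise Kronecker) product encodes a joint attribute.

```python
import numpy as np

# Hypothetical toy dataset: each record has a colour, a size, and a quantity.
colours = ["red", "blue", "red", "blue", "red"]
sizes   = ["S",   "S",    "L",   "L",    "S"]
qty     = np.array([3, 1, 4, 2, 5])

def projection(values):
    """Indicator (projection) matrix: one row per distinct value, one column per record."""
    levels = sorted(set(values))
    return levels, np.array([[1 if v == lv else 0 for v in values] for lv in levels])

c_levels, C = projection(colours)   # shape (2, 5), rows: ['blue', 'red']
s_levels, S = projection(sizes)     # shape (2, 5), rows: ['L', 'S']

# Cross tabulation of total quantity by colour x size, expressed purely
# with matrix multiplication and transposition.
crosstab = C @ np.diag(qty) @ S.T   # rows: colours, columns: sizes

# The Khatri-Rao (column-wise Kronecker) product of C and S encodes the
# joint (colour, size) attribute as a single projection matrix.
khatri_rao = np.vstack([np.kron(C[:, j], S[:, j]) for j in range(C.shape[1])]).T
totals_joint = khatri_rao @ qty     # same numbers as crosstab, flattened row-wise
```

Multiplying by further projection matrices in the same style yields higher-dimensional data cubes, which is the consolidation idea the abstract refers to.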
63.
In this paper, we propose a metaheuristic for solving an original scheduling problem with auxiliary resources in the photolithography workshop of a semiconductor plant. The photolithography workshop is often a bottleneck, and improving scheduling decisions in this workshop can help improve performance indicators for the whole plant. Two optimization criteria are considered separately: the weighted flow time (to be minimized) and the number of products processed (to be maximized). After stating the problem and establishing some properties of the solution space, we show how these properties help us solve the problem efficiently with the proposed memetic algorithm, which has been implemented and tested on large generated instances. Numerical experiments show that good solutions are obtained within a reasonable computational time.
64.
Some supervised tasks are presented with a numerical output, but decisions have to be made in a discrete, binarised way, according to a particular cutoff. This binarised regression task is a very common situation that requires its own analysis, different from regression, classification, and ordinal regression. We first investigate the application cases in terms of the information about the distribution and range of the cutoffs and distinguish six possible scenarios, some of which are more common than others. Next, we study two basic approaches: the retraining approach, which discretises the training set whenever the cutoff is available and learns a new classifier from it, and the reframing approach, which learns a regression model and sets the cutoff when this becomes available during deployment. In order to assess the binarised regression task, we introduce context plots featuring error against cutoff. Two special cases are of interest, the \( UCE \) and \( OCE \) curves, showing that the area under the former is the mean absolute error, while the area under the latter is a new metric that lies between a ranking measure and a residual-based measure. A comprehensive evaluation of the retraining and reframing approaches is performed using a repository of binarised regression problems created on purpose, concluding that neither method is clearly better than the other, except when the size of the training data is small.
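A minimal sketch of the reframing approach described above, with synthetic data and illustrative names (not the paper's code): a single regression model is trained once, and binarisation is deferred to deployment, when the cutoff becomes known.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.uniform(0, 10, size=(200, 1))
y = 2.0 * X[:, 0] + rng.normal(0, 1, size=200)   # synthetic linear relation

# Least-squares fit with intercept (a stand-in for any regression learner).
A = np.hstack([X, np.ones((200, 1))])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)

def reframe(x, cutoff):
    """Reframing: binarise the regression prediction only at deployment time."""
    pred = coef[0] * x + coef[1]
    return int(pred >= cutoff)
```

By contrast, the retraining approach would discretise `y` at each new cutoff and fit a fresh classifier, trading deployment-time simplicity for one training run per cutoff.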
65.
Increasing numbers of hard environmental constraints are being imposed in urban traffic networks by authorities in an attempt to mitigate pollution caused by traffic. However, it is not trivial for authorities to assess the cost of imposing such hard environmental constraints. This leads to difficulties both in setting the constraining values and in implementing effective control measures. For that reason, quantifying the cost of imposing hard environmental constraints for a given network becomes crucial. This paper first shows that, for a given network, this cost is related not only to the environmental constraints themselves but also to the control measures considered. Next, we present an assessment criterion that quantifies the loss of optimality, under the control measures considered, caused by introducing the environmental constraints. The criterion can be computed by solving a bi-level programming problem with and without environmental constraints. A simple case study shows its practicability, as well as the differences between this framework and other frameworks integrating environmental aspects. The proposed framework is widely applicable when assessing the interaction of traffic and its environmental aspects.
66.
In this paper we introduce a new variant of shape differentiation which is adapted to the deformation of shapes along their normal direction. This is typically the case in the level-set method for shape optimization, where the shape evolves with a normal velocity. As with all other variants of the original Hadamard method of shape differentiation, our approach yields the same first-order derivative. However, the Hessian, or second-order derivative, is different and somewhat simpler, since only normal movements are allowed. The applications of this new Hessian formula are twofold. First, it leads to a novel extension method for the normal velocity, used in the Hamilton-Jacobi equation of front propagation. Second, as could be expected, it is the basis of a Newton optimization algorithm which is conceptually simpler, since no tangential displacements have to be considered. Numerical examples are given to illustrate the potential of these two applications. The key technical tool for our approach is the method of bicharacteristics for solving Hamilton-Jacobi equations. Our new idea is to differentiate the shape along these bicharacteristics (a system of two ordinary differential equations).
67.
This paper analyzes the effects of security risk factors specific to the cloud computing paradigm on the acceptance of enterprise cloud services, with the aim of illuminating the factors that could vitalize the adoption of corporate cloud services in the future. Acceptance intention was set as the dependent variable, and the independent variables were set with reference to technology acceptance theory. Security risks were categorized into compliance risk, information leakage risk, troubleshooting risk and service discontinuation risk to design the analysis model.
68.
69.
In the density classification problem, a binary cellular automaton (CA) should decide whether an initial configuration contains more 0s or more 1s. The answer is given when all cells of the CA agree on a given state. This problem is known to have no exact solution in the case of binary deterministic one-dimensional CA. We investigate how randomness in CA may help us solve the problem. We analyse the behaviour of stochastic CA rules that perform the density classification task. We show that describing stochastic rules as a “blend” of deterministic rules allows us to derive quantitative results on the classification quality and the classification time of previously studied rules. We introduce a new rule whose effect is to spread defects and to wash them out. This stochastic rule solves the problem with arbitrary precision; that is, its quality of classification can be made arbitrarily high, though at the price of an increased convergence time. We experimentally demonstrate that this rule exhibits good scaling properties and that it attains qualities of classification never reached so far.
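The "blend" idea can be sketched as follows. This is an illustrative stochastic rule mixing the elementary CA rules 184 (traffic) and 232 (majority) on a periodic ring, a combination studied in this line of work; the parameter names and this specific blend are assumptions for the sketch, not necessarily the paper's exact rule.

```python
import numpy as np

# Output lookup tables for elementary CA rules, indexed by the
# neighbourhood code 4*left + 2*centre + right.
RULE_184 = np.array([0, 0, 0, 1, 1, 1, 0, 1])   # "traffic": conserves the 1s
RULE_232 = np.array([0, 0, 0, 1, 0, 1, 1, 1])   # "majority" of the 3 cells

def step(config, eta, rng):
    """One synchronous step: each cell applies rule 184 with probability eta,
    rule 232 otherwise, on a periodic ring."""
    left, right = np.roll(config, 1), np.roll(config, -1)
    code = 4 * left + 2 * config + right
    use_traffic = rng.random(config.size) < eta
    return np.where(use_traffic, RULE_184[code], RULE_232[code])

def classify(config, eta=0.1, max_steps=2000, seed=0):
    """Run until consensus (all 0s or all 1s) or give up after max_steps."""
    rng = np.random.default_rng(seed)
    for _ in range(max_steps):
        if config.min() == config.max():    # consensus reached
            return int(config[0])
        config = step(config, eta, rng)
    return None                             # no consensus within the budget
```

Lowering `eta` slows convergence but, in the blends studied in this literature, improves the probability of classifying correctly, which matches the quality/time trade-off the abstract describes.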
70.
Sampling is a key step in the analysis of chemical compounds. It is particularly important in the environmental field, for example for wastewater effluents, wet-weather discharges or streams, in which flows and concentrations vary greatly over time. In contrast to the improvements that have occurred in analytical measurement, developments in the field of sampling are less active. However, sampling errors may exceed those of the analytical processes by an order of magnitude. We propose an Internet-based application, built on a sampling theory, to identify and quantify the errors arising in the process of taking samples. This general theory of sampling, already applied in other areas, helps to answer questions about the number of samples, their volume, their representativeness, etc. Hosting the application on the Internet facilitates the use of these theoretical tools and raises awareness of the uncertainties related to sampling. An example is presented that highlights the importance of the sampling step in the quality of analytical results.
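As a hedged illustration of why the number of increments matters (synthetic data, not the paper's application): a time-varying concentration is sampled with a few evenly spaced grabs, and the error of the composite-sample mean is measured against the true daily mean.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic one-day concentration signal at minute resolution:
# a diurnal cycle plus noise, loosely mimicking a wastewater stream.
t = np.linspace(0, 24, 24 * 60)
conc = 10 + 4 * np.sin(2 * np.pi * t / 24) + rng.normal(0, 1, t.size)
true_mean = conc.mean()

def composite_error(n_grabs):
    """Absolute error of a composite sample built from n evenly spaced grabs."""
    idx = np.round(np.linspace(0, t.size - 1, n_grabs)).astype(int)
    return abs(conc[idx].mean() - true_mean)

# More increments generally means a smaller sampling error.
errors = {n: composite_error(n) for n in (2, 6, 24, 96)}
```

A grab-versus-composite comparison like this is the kind of question (how many samples, how representative) that the theory of sampling formalizes.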
Copyright © 北京勤云科技发展有限公司 (Beijing Qinyun Technology Development Co., Ltd.)  京ICP备09084417号