61.
We consider deterministic broadcasting in radio networks whose nodes have full topological information about the network. The aim is to design a polynomial algorithm which, given a graph G with source s, produces a fast broadcast scheme in the radio network represented by G. Since the problem of finding a fastest broadcast scheme for a given graph is NP-hard, only an approximation algorithm can be hoped for. We give a deterministic polynomial algorithm which produces a broadcast scheme of length D + O(log^2 n) for every n-node graph of diameter D, thus improving a result of Gąsieniec et al. (PODC 2005) [17] and solving a problem stated there. Unless the inclusion NP ⊆ BPTIME(n^{O(log log n)}) holds, this length of a polynomially constructible deterministic broadcast scheme is optimal. A preliminary version of this paper (with a weaker result) appeared in Proc. 7th International Workshop on Approximation Algorithms for Combinatorial Optimization Problems (APPROX 2004), August 2004, Harvard University, Cambridge, USA, LNCS 3122, 171–182. Research of the second author was supported in part by an NSERC discovery grant and by the Research Chair in Distributed Computing of the Université du Québec en Outaouais. Part of this work was done during the second author's visit at the Max-Planck-Institut für Informatik.
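The reception rule underlying such broadcast schemes (a node gets the message in a round only if exactly one of its neighbours transmits) is easy to state operationally. The sketch below is a hypothetical validity checker, not taken from the paper; the graph and schedules are illustrative:

```python
def is_broadcast_scheme(adj, source, schedule):
    """Check a candidate broadcast scheme under the radio model: in each
    round a set of already-informed nodes transmits, and an uninformed
    node receives the message iff exactly one of its neighbours
    transmits (two or more cause a collision)."""
    informed = {source}
    for transmitters in schedule:
        if not transmitters <= informed:
            return False                 # a node may not transmit before it is informed
        newly = set()
        for v in adj:
            if v not in informed:
                senders = [u for u in adj[v] if u in transmitters]
                if len(senders) == 1:    # collision-free reception
                    newly.add(v)
        informed |= newly
    return set(adj) == informed

adj = {0: {1, 3}, 1: {0, 2}, 2: {1, 3}, 3: {0, 2}}   # 4-cycle, source 0
print(is_broadcast_scheme(adj, 0, [{0}, {1}]))       # True
print(is_broadcast_scheme(adj, 0, [{0}, {1, 3}]))    # False: node 2 hears a collision
```

The second schedule fails precisely because of the collision constraint that makes fast broadcasting in radio networks hard.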
62.
Kernel PCA for Feature Extraction and De-Noising in Nonlinear Regression
In this paper, we propose the application of the Kernel Principal Component Analysis (PCA) technique for feature selection in a high-dimensional feature space, where input variables are mapped by a Gaussian kernel. The extracted features are employed in the regression problems of chaotic Mackey–Glass time-series prediction in a noisy environment and estimating human signal detection performance from brain event-related potentials elicited by task relevant signals. We compared results obtained using either Kernel PCA or linear PCA as data preprocessing steps. On the human signal detection task, we report the superiority of Kernel PCA feature extraction over linear PCA. Similar to linear PCA, we demonstrate de-noising of the original data by the appropriate selection of various nonlinear principal components. The theoretical relation and experimental comparison of Kernel Principal Components Regression, Kernel Ridge Regression and ε-insensitive Support Vector Regression is also provided.
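As a concrete illustration of the preprocessing step described above, here is a minimal Gaussian-kernel PCA written from scratch; the data, kernel width, and component count are arbitrary stand-ins, not the paper's experimental setup:

```python
import numpy as np

def kernel_pca_features(X, n_components, gamma):
    """Gaussian-kernel PCA: return scores of the leading nonlinear
    principal components of the rows of X (minimal from-scratch sketch)."""
    sq = np.sum(X**2, axis=1)
    K = np.exp(-gamma * (sq[:, None] + sq[None, :] - 2.0 * X @ X.T))
    n = len(K)
    one = np.full((n, n), 1.0 / n)
    Kc = K - one @ K - K @ one + one @ K @ one    # centre in feature space
    vals, vecs = np.linalg.eigh(Kc)               # eigh returns ascending order
    idx = np.argsort(vals)[::-1][:n_components]
    alphas = vecs[:, idx] / np.sqrt(np.maximum(vals[idx], 1e-12))
    return Kc @ alphas                            # component scores

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))                     # illustrative data
Z = kernel_pca_features(X, n_components=10, gamma=0.5)
print(Z.shape)  # (100, 10)
```

The returned scores can then feed any regression model (ridge, SVR, etc.), which is the "kernel principal components regression" pipeline the abstract compares against its alternatives.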
63.
Most algorithms for blind separation/extraction and independent component analysis (ICA) cannot separate mixtures of sources with extremely low kurtosis or colored Gaussian sources. Moreover, to separate mixtures of super- and sub-Gaussian signals, it is necessary to use adaptive (time-variable) or switching nonlinearities which are controlled via computationally intensive measures, such as estimation of the sign of the kurtosis of the extracted signals. In this paper, we develop a very simple neural network model and an efficient on-line adaptive algorithm that sequentially extract temporally correlated sources with arbitrary distributions, including colored Gaussian sources and sources with extremely low (or even zero) kurtosis. The validity and performance of the algorithm have been confirmed by extensive computer simulation experiments.
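A batch, second-order sketch of the same idea (exploiting temporal correlation rather than kurtosis) is the classical AMUSE procedure: whiten the mixtures, then diagonalize a time-lagged covariance. This is not the paper's on-line neural algorithm, but it shows why temporally correlated sources, including Gaussian ones, are separable; the mixing matrix and signals below are illustrative:

```python
import numpy as np

def amuse(X, tau=1):
    """AMUSE-style separation: whiten, then diagonalize a symmetrised
    time-lagged covariance. Works for temporally correlated sources
    with distinct lag-tau autocorrelations (batch sketch only)."""
    X = X - X.mean(axis=1, keepdims=True)
    T = X.shape[1]
    d, E = np.linalg.eigh(X @ X.T / T)
    W = E @ np.diag(1.0 / np.sqrt(d)) @ E.T       # whitening matrix
    Z = W @ X
    Ct = Z[:, :-tau] @ Z[:, tau:].T / (T - tau)   # lag-tau covariance
    _, U = np.linalg.eigh((Ct + Ct.T) / 2)
    return U.T @ Z

t = np.arange(2000)
S = np.vstack([np.sin(0.5 * t),                      # fast sinusoid
               np.sign(np.sin(0.013 * t + 0.1))])    # slow square wave
A = np.array([[1.0, 0.6], [0.4, 1.0]])               # illustrative mixing
Y = amuse(A @ S)
corr = np.abs(np.corrcoef(np.vstack([S, Y]))[:2, 2:])  # |corr|, sources vs outputs
print(corr.round(2))
```

Each recovered row should match one source up to sign and scale; no kurtosis estimate is needed because the two sources differ in their lag-1 autocorrelation.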
64.
Herman's algorithm is a synchronous randomized protocol for achieving self-stabilization in a token ring consisting of N processes. The interaction of tokens makes the dynamics of the protocol very difficult to analyze. In this paper we study the distribution of the time to stabilization, assuming that there are three tokens in the initial configuration. We show, for arbitrary N and an arbitrary timeout t, that the probability of stabilization within time t is minimized by choosing as the initial three-token configuration the one in which the tokens are placed equidistantly on the ring. Our result strengthens a corollary of a theorem of McIver and Morgan (Inf. Process. Lett. 94(2): 79–84, 2005), which states that the expected stabilization time is maximized by the equidistant configuration.
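In the standard token-level abstraction of Herman's protocol, each token independently stays put or moves one position clockwise with probability 1/2, and two tokens meeting on the same cell annihilate. A small Monte Carlo sketch (parameters are illustrative) makes the equidistant-is-slowest phenomenon visible for N = 9:

```python
import random

def herman_steps(n, tokens, rng):
    """Rounds until one token remains: each token stays or moves one
    cell clockwise with probability 1/2; coinciding tokens annihilate
    in pairs (token-level abstraction of Herman's protocol)."""
    tokens = list(tokens)
    steps = 0
    while len(tokens) > 1:
        moved = [(p + rng.randint(0, 1)) % n for p in tokens]
        survivors = []
        for p in moved:                    # pairwise annihilation
            if p in survivors:
                survivors.remove(p)
            else:
                survivors.append(p)
        tokens = survivors
        steps += 1
    return steps

rng = random.Random(0)
n, trials = 9, 2000
avg = lambda cfg: sum(herman_steps(n, cfg, rng) for _ in range(trials)) / trials
e_eq = avg([0, 3, 6])    # equidistant: gaps (3, 3, 3)
e_cl = avg([0, 1, 2])    # clustered:   gaps (1, 1, 7)
print(round(e_eq, 1), round(e_cl, 1))
```

Under the known closed form E = 4abc/N for three tokens with inter-token gaps a, b, c, the equidistant configuration (3, 3, 3) gives an expected stabilization time of 12 rounds while (1, 1, 7) gives about 3.1, and the simulated averages should land near these values.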
65.
To compare the relative impact of dietary lauric acid (12:0) and palmitic acid (16:0) on plasma lipids, two fat-sensitive species, Mongolian gerbils and cebus monkeys, were fed cholesterol-free, purified diets enriched with either 12:0-rich or 16:0-rich fats, while all other fatty acids were held constant by selective blending of up to five natural fats or oils. The two gerbil diets (40 en% from fat) allowed for an 8 en% exchange between 12:0 and 16:0, and the monkey diets (31 en% from fat) allowed for a 6 en% exchange between these two fatty acids. Eight gerbils received the diets for eight weeks, and 12 cebus monkeys were fed each diet in a cross-over design for up to 22 wk. Both diets resulted in similar plasma cholesterol, triglyceride, and high density lipoprotein cholesterol concentrations within each species. Additionally, separation of cebus lipoproteins by discontinuous density-gradient ultracentrifugation failed to show any dietary differences in the concentration or composition of the three major lipoprotein classes (d<1.019, 1.019–1.055, and 1.055–1.168 g/mL). Thus, in two species sensitive to manipulations in dietary fat while consuming cholesterol-free diets, 16:0 was not hypercholesterolemic relative to 12:0. Based on a paper presented at the PORIM International Palm Oil Congress (PIPOC) held in Kuala Lumpur, Malaysia, September 1993.
66.
In this paper the conjugate fluid flow and energy transport problem (involving conduction-convection-radiation heat transfer) arising in the Czochralski crystal growth process is analysed. The solidifying material is treated as a pure and semitransparent substance with material properties depending neither on temperature nor on wavelength. The solution of the problem is obtained iteratively using two computer codes: FLUENT, a commercial CFD package, and a BEM-based in-house code capable of analysing the radiative heat transfer in the entire computational domain. The obtained results not only show the velocity field and temperature distribution within the bodies under consideration but also demonstrate the influence of thermal radiation on these quantities.
67.
We present a new learning algorithm for the blind separation of independent source signals having non-zero skewness (third-order cumulant), i.e., source signals with non-symmetric probability distributions, from their linear mixtures. It is shown that for the class of source signals whose probability distribution functions are not symmetric, a simple adaptive learning algorithm using the quadratic function f(x) = x^2 is very efficient for the blind source separation task. It is proved that all stable equilibria of the proposed learning algorithm are desirable solutions. Extensive computer simulation experiments confirmed the validity of the proposed algorithm.
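The role of the quadratic nonlinearity can be sketched with a batch fixed-point iteration that maximizes the third cumulant of the extracted signal; this is not the paper's on-line adaptive rule, and the mixing matrix and sources below are illustrative:

```python
import numpy as np

def whiten(X):
    """Zero-mean, identity-covariance transform of the mixtures."""
    X = X - X.mean(axis=1, keepdims=True)
    d, E = np.linalg.eigh(np.cov(X))
    return E @ np.diag(d ** -0.5) @ E.T @ X

def extract_skewed(Z, iters=200, seed=0):
    """Extract one skewed source from whitened data Z with the
    quadratic nonlinearity f(u) = u**2: iterate w <- E[z (w.z)^2],
    then renormalize (batch fixed-point sketch)."""
    rng = np.random.default_rng(seed)
    w = rng.normal(size=Z.shape[0])
    w /= np.linalg.norm(w)
    for _ in range(iters):
        u = w @ Z
        w = Z @ (u ** 2) / Z.shape[1]   # empirical E[z (w.z)^2]
        w /= np.linalg.norm(w)
    return w @ Z

rng = np.random.default_rng(1)
s1 = rng.exponential(size=5000) - 1.0        # skewed source (skewness 2)
s2 = rng.normal(size=5000)                   # symmetric source
X = np.array([[1.0, 0.5], [0.3, 1.0]]) @ np.vstack([s1, s2])
y = extract_skewed(whiten(X))
c = abs(float(np.corrcoef(y, s1)[0, 1]))     # |corr| with the skewed source
print(round(c, 2))
```

Because the Gaussian component has zero third cumulant, the iteration is driven entirely toward the skewed source, which is why the quadratic function suffices for this source class.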
68.
We discuss the rough set approach to some challenging problems that are of great importance for making progress in many applications. Among such challenges are the following ones.
69.
In this paper an image data compression scheme based on the Periodic Haar Piecewise-Linear (PHL) transform and quantization tables is proposed. The effectiveness of the compression for different classes of images is evaluated, and a comparison of compression quality using the PHL and DCT transforms is given.
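The PHL transform itself is not reproduced here, but the coder structure the abstract describes (orthogonal transform followed by quantization tables) can be sketched with an ordinary one-level Haar transform as a stand-in basis; the signal and quantization step are illustrative:

```python
import numpy as np

def haar_1d(x):
    """One level of the orthonormal Haar transform: averages then details."""
    a = (x[0::2] + x[1::2]) / np.sqrt(2.0)
    d = (x[0::2] - x[1::2]) / np.sqrt(2.0)
    return np.concatenate([a, d])

def ihaar_1d(c):
    """Inverse of haar_1d."""
    n = len(c) // 2
    a, d = c[:n], c[n:]
    x = np.empty(2 * n)
    x[0::2] = (a + d) / np.sqrt(2.0)
    x[1::2] = (a - d) / np.sqrt(2.0)
    return x

signal = np.linspace(0.0, 1.0, 16) ** 2      # toy stand-in for an image row
step = 0.05                                  # entry of a uniform quantization table
coeffs = haar_1d(signal)
quantized = np.round(coeffs / step) * step   # transform coding: quantize coefficients
recon = ihaar_1d(quantized)
print(float(np.max(np.abs(recon - signal))) < step)  # True
```

Because the transform is orthonormal, the reconstruction error stays bounded by the quantization step; substituting the PHL or DCT basis for the Haar stage leaves this pipeline unchanged, which is what makes the quality comparison in the paper meaningful.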
70.
In the paper the classical Dugdale model has been generalized, taking into account the influence of specimen thickness, in-plane constraint, and the effect of strain hardening on the level of stress distribution within the strip yield zone (SYZ). The modification has been performed utilizing the Huber–Mises–Hencky and Tresca yield hypotheses and Guo Wanlin's Tz constraint coefficient. Results are presented in a form useful for applications. As an example, the modified model has been applied to draw the failure assessment diagram (FAD). The new FADs have been compared with those adopted from the SINTAP procedures.
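For reference, the classical (plane-stress, elastic-perfectly-plastic) Dugdale model that the paper generalizes relates the strip-yield-zone length R to the remote stress and the yield stress for a through crack of length 2a:

```latex
% Classical Dugdale strip-yield relation
\frac{a}{a+R} = \cos\!\left(\frac{\pi\sigma}{2\sigma_Y}\right),
\qquad
R = a\left[\sec\!\left(\frac{\pi\sigma}{2\sigma_Y}\right) - 1\right]
```

The generalization described above modifies the stress level inside the zone through the yield hypothesis and the Tz constraint factor, so R no longer follows this simple closed form.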