41.
As machines and programs have become more complex, the process of programming applications that can exploit the power of high-performance systems has become more difficult and correspondingly more labor-intensive. This has substantially widened the software gap—the discrepancy between the need for new software and the aggregate capacity of the workforce to produce it. This problem has been compounded by the slow growth of programming productivity, especially for high-performance programs, over the past two decades. One way to bridge this gap is to make it possible for end users to develop programs in high-level domain-specific programming systems. In the past, a major impediment to the acceptance of such systems has been the poor performance of the resulting applications. To address this problem, we are developing a new compiler-based infrastructure, called TeleGen, that will make it practical to construct efficient domain-specific high-level languages from annotated component libraries. We call these languages telescoping languages, because they can be nested within one another. For programs written in telescoping languages, high performance and reasonable compilation times can be achieved by exhaustively analyzing the component libraries in advance to produce a language processor that recognizes and optimizes library operations as primitives in the language. The key to making this strategy practical is to keep compile times low by generating a custom compiler with extensive built-in knowledge of the underlying libraries. The goal is to achieve compile times that are linearly proportional to the size of the program presented by the user, rather than to the aggregate size of that program plus the base libraries.
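The abstract describes an architecture rather than code, but the core idea, pre-analyzing an annotated library so that a generated compiler can treat its routines as optimizable primitives, can be caricatured. The toy Python pass below is purely illustrative and not TeleGen (which does not target Python); the library name, the annotation table, and the specialized variant are invented. It recognizes a known library call in a user program and substitutes a specialized primitive chosen from "annotations" computed offline.

import ast

# Hypothetical result of offline library analysis: a map from a recognized
# library operation to a specialized primitive (names are invented).
SPECIALIZATIONS = {
    ("lib", "matvec"): ("lib", "matvec_sparse"),
}

class TelescopePass(ast.NodeTransformer):
    # Rewrite recognized library calls as if they were language primitives,
    # using knowledge gathered when the library was analyzed in advance.
    def visit_Call(self, node):
        self.generic_visit(node)
        f = node.func
        if isinstance(f, ast.Attribute) and isinstance(f.value, ast.Name):
            key = (f.value.id, f.attr)
            if key in SPECIALIZATIONS:
                mod, attr = SPECIALIZATIONS[key]
                node.func = ast.Attribute(value=ast.Name(id=mod, ctx=ast.Load()),
                                          attr=attr, ctx=ast.Load())
        return node

user_program = "y = lib.matvec(A, x)"
tree = TelescopePass().visit(ast.parse(user_program))
ast.fix_missing_locations(tree)
print(ast.unparse(tree))   # -> y = lib.matvec_sparse(A, x)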
42.
The subject of this paper is the direct identification of continuous-time autoregressive moving average (CARMA) models. The topic is viewed from the frequency domain perspective which then turns the reconstruction of the continuous-time power spectral density (CT-PSD) into a key issue. The first part of the paper therefore concerns the approximate estimation of the CT-PSD from uniformly sampled data under the assumption that the model has a certain relative degree. The approach has its point of origin in the frequency domain Whittle likelihood estimator. The discrete- or continuous-time spectral densities are estimated from equidistant samples of the output. For low sampling rates the discrete-time spectral density is modeled directly by its continuous-time spectral density using the Poisson summation formula. In the case of rapid sampling the continuous-time spectral density is estimated directly by modifying its discrete-time counterpart.
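As a rough illustration of the relation the abstract builds on, and not of the paper's estimator, the sketch below folds a known continuous-time spectral density into its sampled counterpart with a truncated Poisson summation, here for a first-order CAR process. The model, its parameters, and the truncation length are assumptions made for the example.

import numpy as np

def ct_psd_car1(w, a=1.0, sigma2=1.0):
    # Two-sided CT-PSD of the CAR(1) process dx = -a*x dt + dW (illustrative model).
    return sigma2 / (w ** 2 + a ** 2)

def dt_psd_via_poisson(omega, h, n_alias=50, a=1.0, sigma2=1.0):
    # DT-PSD at normalized frequency omega (rad/sample), obtained from the CT-PSD
    # by truncating the Poisson summation over the aliasing frequencies 2*pi*k/h.
    k = np.arange(-n_alias, n_alias + 1)
    w = (omega[:, None] + 2 * np.pi * k[None, :]) / h
    return ct_psd_car1(w, a, sigma2).sum(axis=1) / h

omega = np.linspace(-np.pi, np.pi, 401)
slow = dt_psd_via_poisson(omega, h=1.0)     # low sampling rate: aliasing terms matter
fast = dt_psd_via_poisson(omega, h=0.01)    # rapid sampling: the k = 0 term dominates
print(fast[200] * 0.01, ct_psd_car1(np.array([0.0]))[0])   # nearly equal near omega = 0

For rapid sampling the k = 0 term dominates, which is the regime in which the continuous-time spectral density can be read off almost directly from its discrete-time counterpart.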
43.
Nonlinear black-box modeling in system identification: a unified overview   (cited 7 times: 0 self-citations, 7 by others)
A nonlinear black-box structure for a dynamical system is a model structure that is prepared to describe virtually any nonlinear dynamics. There has been considerable recent interest in this area, with structures based on neural networks, radial basis networks, wavelet networks and hinging hyperplanes, as well as wavelet-transform-based methods and models based on fuzzy sets and fuzzy rules. This paper describes all these approaches in a common framework, from a user's perspective. It focuses on the common features of the different approaches, the choices that have to be made, and the considerations relevant to a successful system-identification application of these techniques. It is pointed out that the nonlinear structures can be seen as a concatenation of a mapping from observed data to a regression vector and a nonlinear mapping from the regressor space to the output space. These mappings are discussed separately. The latter mapping is usually formed as a basis function expansion. The basis functions are typically formed from one simple scalar function, which is modified in terms of scale and location. The expansion from the scalar argument to the regressor space is achieved by a radial- or a ridge-type approach. Basic techniques for estimating the parameters in the structures are criterion minimization, as well as two-step procedures, where first the relevant basis functions are determined, using data, and then a linear least-squares step determines the coordinates of the function approximation. A particular problem is to deal with the large number of potentially necessary parameters. This is handled by making the number of ‘used’ parameters considerably less than the number of ‘offered’ parameters, by regularization, shrinking, pruning or regressor selection.
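To make the two-step procedure concrete, here is a minimal sketch under assumed conditions (a toy dynamical system and Gaussian radial basis functions; none of it is taken from the paper): the observed data are first mapped to a regression vector, the basis functions are then fixed by choosing centers and a scale, and finally a linear least-squares step determines the coordinates of the expansion.

import numpy as np

def rbf_features(X, centers, scale):
    # Radial construction: a Gaussian basis function kappa(||x - c|| / scale)
    # for every center c, evaluated at every regression vector in X.
    d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
    return np.exp(-(d / scale) ** 2)

# Toy nonlinear system used only for illustration.
rng = np.random.default_rng(0)
N = 500
u = rng.uniform(-1, 1, N)
y = np.zeros(N)
for t in range(1, N):
    y[t] = np.tanh(y[t - 1]) + 0.5 * u[t - 1] ** 2 + 0.05 * rng.standard_normal()

# Step 1: map observed data to a regression vector phi(t) = [y(t-1), u(t-1)].
Phi = np.column_stack([y[:-1], u[:-1]])
target = y[1:]

# Step 2a: choose the basis functions (centers on a coarse grid, fixed scale).
g1, g2 = np.meshgrid(np.linspace(-1, 1.5, 7), np.linspace(-1, 1, 7))
centers = np.column_stack([g1.ravel(), g2.ravel()])

# Step 2b: linear least squares for the coordinates of the expansion.
F = rbf_features(Phi, centers, scale=0.5)
theta, *_ = np.linalg.lstsq(F, target, rcond=None)
print("fit RMSE:", np.sqrt(np.mean((F @ theta - target) ** 2)))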
44.
Lennart Sjöberg, Displays, 1987, 8(4): 210-212
A discussion of the factors involved in the subjective perception of television picture quality is presented. Much of the work in this field is cited, and it is concluded that future work needs to consider a broader range of variables than has been looked at so far.
45.
A general family of tracking algorithms for linear regression models is studied. It includes the familiar least mean square gradient approach, recursive least squares, and Kalman filter based estimators. The exact expressions for the quality of the obtained estimates are complicated. Approximate, easy-to-use expressions for the covariance matrix of the parameter tracking error are developed. These are applicable over the whole time interval, including the transient, and the approximation error can be explicitly calculated.
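A minimal sketch of two members of this family, normalized LMS and RLS with a forgetting factor, tracking a slowly drifting parameter vector. The random-walk parameter model, step size, forgetting factor, and noise levels are assumptions for the example, not values from the paper.

import numpy as np

rng = np.random.default_rng(1)
N, d = 2000, 2
theta_true = np.cumsum(0.01 * rng.standard_normal((N, d)), axis=0)   # slow parameter drift
phi = rng.standard_normal((N, d))                                    # regressors
y = np.sum(phi * theta_true, axis=1) + 0.1 * rng.standard_normal(N)

mu, lam = 0.1, 0.99                 # LMS step size, RLS forgetting factor
th_lms, th_rls = np.zeros(d), np.zeros(d)
P = 100.0 * np.eye(d)
err_lms, err_rls = np.zeros(N), np.zeros(N)
for t in range(N):
    x = phi[t]
    # normalized LMS gradient update
    th_lms = th_lms + mu * x * (y[t] - x @ th_lms) / (1e-6 + x @ x)
    # RLS with exponential forgetting
    k = P @ x / (lam + x @ P @ x)
    th_rls = th_rls + k * (y[t] - x @ th_rls)
    P = (P - np.outer(k, x @ P)) / lam
    err_lms[t] = np.sum((th_lms - theta_true[t]) ** 2)
    err_rls[t] = np.sum((th_rls - theta_true[t]) ** 2)

print("mean tracking MSE  LMS: %.4f  RLS: %.4f"
      % (err_lms[N // 2:].mean(), err_rls[N // 2:].mean()))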
46.
The density of single spray-dried granules has been determined with a new method based on atomic force microscopy (AFM). Spherical granules with a well-defined diameter are attached to the AFM cantilever, which acts as a beam-type spring, and the mass of a granule is estimated from the shift in the resonant frequency. The error of the measurements associated with the method was estimated to vary between 1% and 5%, depending on the size and shape of the granule. Density measurements of spray-dried WC–Co granules are presented, and the effect of a polymeric binder and dispersant on the consolidation during drying is discussed.
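To illustrate the measurement principle, the attached mass can be backed out of the resonance shift of a cantilever modeled as a beam-type spring, f = (1/(2*pi)) * sqrt(k/m_eff), and the density then follows from the granule diameter assuming a sphere. All numbers below are invented order-of-magnitude inputs, not values reported in the paper.

import numpy as np

def attached_mass(k, f0, f1):
    # The granule shifts the resonance from f0 to f1 < f0, so its mass is the
    # difference of the effective masses before and after attachment.
    return k / (2 * np.pi * f1) ** 2 - k / (2 * np.pi * f0) ** 2

def sphere_density(mass, diameter):
    # Assumes a spherical granule with the measured diameter.
    return mass / (np.pi / 6.0 * diameter ** 3)

# Invented inputs for illustration only.
k = 40.0        # N/m   nominal cantilever spring constant
f0 = 300e3      # Hz    resonance of the bare cantilever
f1 = 95e3       # Hz    resonance with the granule attached
dia = 30e-6     # m     granule diameter

m = attached_mass(k, f0, f1)
print("mass: %.2e kg   density: %.0f kg/m^3" % (m, sphere_density(m, dia)))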
47.
Regressor selection with the analysis of variance method   (cited 1 time: 0 self-citations, 1 by others)
Identification of non-linear dynamical models of a black-box nature involves both structure decisions, i.e., which regressors to use and which regressor function to select, and the estimation of the parameters involved. The typical approach in system identification seems to be to mix all these steps, which for example means that the selection of regressors is based on the fit that is achieved for different choices. Alternatively, one could interpret the regressor selection as being based on hypothesis tests (F-tests) at a certain confidence level that depends on the data. In many cases it would be desirable to decide which regressors to use independently of the other steps. In this paper we investigate what the well-known method of analysis of variance (ANOVA) can offer for this problem. System identification applications violate many of the ideal conditions for which ANOVA was designed, and we study how the method performs under such non-ideal conditions. ANOVA is much faster than a typical parametric estimation method using, e.g., neural networks. In our tests it is also more reliable in picking the correct structure, even under non-ideal conditions. One reason for this may be that ANOVA requires the data set to be balanced, that is, all parts of the regressor space are weighted equally. Simply applying tests of fit to the recorded data may, for structure identification, give improper weight to areas with many, or few, samples.
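A stripped-down sketch of the idea: discretize each candidate regressor into cells and F-test whether the cell means of the output differ. The paper's procedure treats interactions and the balancing of the data set more carefully; the toy data, cell count, and the one-way simplification here are assumptions for illustration.

import numpy as np
from scipy.stats import f_oneway

rng = np.random.default_rng(0)
n = 1200
x1 = rng.uniform(-1, 1, n)                       # relevant regressor
x2 = rng.uniform(-1, 1, n)                       # irrelevant regressor
y = np.sin(3 * x1) + 0.2 * rng.standard_normal(n)

def main_effect(x, y, n_cells=4):
    # Discretize the candidate regressor into cells and F-test whether the
    # cell means of y differ (the main effect of this regressor).
    edges = np.quantile(x, np.linspace(0, 1, n_cells + 1))
    cell = np.clip(np.digitize(x, edges[1:-1]), 0, n_cells - 1)
    return f_oneway(*[y[cell == c] for c in range(n_cells)])

for name, x in (("x1", x1), ("x2", x2)):
    F, p = main_effect(x, y)
    print("%s: F = %.1f, p = %.3g" % (name, F, p))

A large F (small p) for x1 and an insignificant result for x2 would suggest keeping only x1, independently of any subsequent choice of regressor function or parameter estimation.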
48.
Oceans cover 71 % of Earth's surface and are home to hundreds of thousands of species, many of which are microbial. Knowledge about marine microbes has strongly increased in the past decades due to global sampling expeditions, and hundreds of detailed studies on marine microbial ecology, physiology, and biogeochemistry. However, the translation of this knowledge into biotechnological applications or synthetic biology approaches using marine microbes has been limited so far. This review highlights key examples of marine bacteria in synthetic biology and metabolic engineering, and outlines possible future work based on the emerging marine chassis organisms Vibrio natriegens and Halomonas bluephagenesis. Furthermore, the valorization of algal polysaccharides by genetically enhanced microbes is presented as an example of the opportunities and challenges associated with blue biotechnology. Finally, new roles for marine synthetic biology in tackling pressing global challenges, including climate change and marine pollution, are discussed.
49.
50.
The bottom-up construction of an artificial cell requires the realization of synthetic cell division. Significant progress has been made toward reliable compartment division, yet mechanisms to segregate the DNA-encoded informational content are still in their infancy. Herein, droplets of DNA Y-motifs are formed by liquid–liquid phase separation. DNA droplet segregation is obtained by cleaving the linking component between two populations of DNA Y-motifs. In addition to enzymatic cleavage, photolabile sites are introduced for spatio-temporally controlled DNA segregation in bulk as well as in cell-sized water-in-oil droplets and giant unilamellar lipid vesicles (GUVs). Notably, the segregation process is slower in confinement than in bulk. The ionic strength of the solution and the nucleobase sequences are employed to regulate the segregation dynamics. The experimental results are corroborated in a lattice-based theoretical model which mimics the interactions between the DNA Y-motif populations. Altogether, engineered DNA droplets, reconstituted in GUVs, can represent a strategy toward a DNA segregation module within bottom-up assembled synthetic cells.
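The lattice-based picture can be caricatured in a few lines: two Y-motif populations attract each other only while the linker is intact, and removing the cross-attraction, mimicking enzymatic or photocleavage, lets them demix. The Monte Carlo toy below is a sketch under invented assumptions (interaction energies, lattice size, temperature, sweep counts), not the paper's model.

import numpy as np

rng = np.random.default_rng(0)
L = 32
grid = rng.integers(0, 2, size=(L, L))           # 0 / 1: the two Y-motif populations

def local_energy(grid, i, j, J):
    s = grid[i, j]
    return sum(J[s, grid[(i + di) % L, (j + dj) % L]]
               for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)))

def sweep(grid, J, beta=2.0):
    # Kawasaki dynamics: propose swaps of neighbouring sites, Metropolis rule.
    for _ in range(grid.size):
        i, j = rng.integers(0, L, 2)
        di, dj = ((1, 0), (-1, 0), (0, 1), (0, -1))[rng.integers(4)]
        k, l = (i + di) % L, (j + dj) % L
        if grid[i, j] == grid[k, l]:
            continue
        e0 = local_energy(grid, i, j, J) + local_energy(grid, k, l, J)
        grid[i, j], grid[k, l] = grid[k, l], grid[i, j]
        e1 = local_energy(grid, i, j, J) + local_energy(grid, k, l, J)
        if e1 > e0 and rng.random() >= np.exp(-beta * (e1 - e0)):
            grid[i, j], grid[k, l] = grid[k, l], grid[i, j]    # reject the swap

J_linked  = np.array([[-1.0, -1.0], [-1.0, -1.0]])   # linker intact: all contacts favourable
J_cleaved = np.array([[-1.0,  0.0], [ 0.0, -1.0]])   # linker cleaved: only like-like attraction

def unlike_fraction(grid):
    return 0.5 * (np.mean(grid != np.roll(grid, 1, 0)) + np.mean(grid != np.roll(grid, 1, 1)))

for _ in range(100):
    sweep(grid, J_linked)
print("unlike contacts while linked  : %.3f" % unlike_fraction(grid))
for _ in range(100):
    sweep(grid, J_cleaved)
print("unlike contacts after cleavage: %.3f" % unlike_fraction(grid))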