Article Search
Results by access type:
  Paid full text: 4,569 articles
  Free: 124 articles
  Free (domestic): 3 articles
Results by subject:
  Electrical engineering: 46
  General: 4
  Chemical industry: 1,055
  Metal technology: 84
  Machinery and instrumentation: 77
  Building science: 156
  Mining engineering: 17
  Energy and power: 126
  Light industry: 339
  Water resources engineering: 66
  Petroleum and natural gas: 19
  Radio and electronics: 233
  General industrial technology: 750
  Metallurgical industry: 1,153
  Atomic energy technology: 30
  Automation technology: 541
Results by year:
  2021: 70; 2020: 42; 2019: 53; 2018: 63; 2017: 70; 2016: 72; 2015: 63; 2014: 102; 2013: 264; 2012: 137
  2011: 214; 2010: 189; 2009: 166; 2008: 192; 2007: 176; 2006: 187; 2005: 155; 2004: 139; 2003: 142; 2002: 113
  2001: 86; 2000: 84; 1999: 87; 1998: 118; 1997: 103; 1996: 97; 1995: 94; 1994: 98; 1993: 83; 1992: 89
  1991: 43; 1990: 65; 1989: 58; 1988: 59; 1987: 50; 1986: 52; 1985: 82; 1984: 77; 1983: 65; 1982: 51
  1981: 46; 1980: 38; 1979: 50; 1978: 51; 1977: 39; 1976: 45; 1975: 33; 1974: 36; 1973: 27; 1972: 29
A total of 4,696 query results were found (search time: 15 ms).
61.
Tungstoenzymes     
62.
Massively parallel processors have begun using commodity operating systems that support demand-paged virtual memory. To evaluate the utility of virtual memory, we measured the behavior of seven shared-memory parallel application programs on a simulated distributed-shared-memory machine. Our results (1) confirm the importance of gang CPU scheduling, (2) show that a page-faulting processor should spin rather than invoke a parallel context switch, (3) show that our parallel programs frequently touch most of their data, and (4) indicate that memory, not just CPUs, must be gang scheduled. Overall, our experiments demonstrate that demand paging has limited value on current parallel machines because of the applications' synchronization and memory reference patterns and the machines' high page-fault and parallel context-switch overheads. An earlier version of this paper was presented at Supercomputing '94. This work is supported in part by NSF Presidential Young Investigator Award CCR-9157366; NSF Grants MIP-9225097, CCR-9100968, and CDA-9024618; Office of Naval Research Grant N00014-89-J-1222; Department of Energy Grant DE-FG02-93ER25176; and donations from Thinking Machines Corporation, Xerox Corporation, and Digital Equipment Corporation.
63.
In this paper we solve linear parabolic problems using a three-stage algorithm. First, the time discretization is approximated using the Laplace transformation method, which is parallel in time (and can be parallel in space as well) and converges at extremely high order. Second, higher-order compact schemes of order four and six are used for the spatial discretization. Finally, the discretized linear algebraic systems are solved using multigrid, and the observed convergence rates for numerical examples are compared with those of other numerical solution methods.
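For orientation only: the abstract does not state the paper's specific fourth- and sixth-order schemes, but the classical fourth-order compact (Padé-type) approximation of the second derivative on a uniform grid with spacing h, shown below as a representative member of that family, illustrates what "higher-order compact" means here.

    % Illustrative fourth-order compact stencil (a standard textbook scheme, not taken from the paper)
    \frac{1}{12}\left(u''_{i-1} + 10\,u''_{i} + u''_{i+1}\right)
      \;=\; \frac{u_{i-1} - 2u_{i} + u_{i+1}}{h^{2}} \;+\; O(h^{4})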
64.
Bounded data with excess observations at the boundary are common in many areas of application. Various individual cases of inflated mixture models have been studied in the literature for bound-inflated data, yet the computational methods have been developed separately for each type of model. In this article we use a common framework for computing these models, and expand the range of models for both discrete and semi-continuous data with point inflation at the lower boundary. The quasi-Newton and EM algorithms are adapted and compared for estimation of model parameters. The numerical Hessian and generalized Louis method are investigated as means for computing standard errors after optimization. Correlated data are included in this framework via generalized estimating equations. The estimation of parameters and effectiveness of standard errors are demonstrated through simulation and in the analysis of data from an ultrasound bioeffect study. The unified approach enables reliable computation for a wide class of inflated mixture models and comparison of competing models.
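To make the modeling idea concrete, the following is a minimal sketch of fitting one member of the inflated-mixture family (a zero-inflated Poisson model) by quasi-Newton optimization, with standard errors taken from the numerical inverse Hessian. The data are simulated and the parameterization is purely illustrative; this is not the authors' unified framework or code.

    import numpy as np
    from scipy.optimize import minimize
    from scipy.special import expit, gammaln

    def zip_negloglik(theta, y):
        # theta[0]: logit of the inflation probability pi; theta[1]: log of the Poisson mean lam
        pi, lam = expit(theta[0]), np.exp(theta[1])
        log_pois = -lam + y * np.log(lam) - gammaln(y + 1.0)
        ll = np.where(y == 0,
                      np.log(pi + (1.0 - pi) * np.exp(-lam)),   # excess zeros mix with Poisson zeros
                      np.log(1.0 - pi) + log_pois)
        return -ll.sum()

    rng = np.random.default_rng(0)
    y = np.where(rng.random(500) < 0.3, 0, rng.poisson(2.5, 500))    # simulated zero-inflated counts

    fit = minimize(zip_negloglik, x0=np.zeros(2), args=(y,), method="BFGS")   # quasi-Newton step
    se = np.sqrt(np.diag(fit.hess_inv))     # numerical-Hessian standard errors (transformed scale)
    print("pi =", expit(fit.x[0]), "lambda =", np.exp(fit.x[1]), "se =", se)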
65.
Experimentation in scientific or medical studies is often carried out in order to model the ‘success’ probability of a binary random variable. Experimental designs for the testing of lack of fit and for estimation, for data with binary responses depending upon covariates which can be controlled by the experimenter, are constructed. It is supposed that the preferred model is one in which the probability of the occurrence of the target outcome depends on the covariates through a link function (logistic, probit, etc.) evaluated at a regression response — a function of the covariates and of parameters to be estimated from the data, once gathered. The fit of this model is to be tested within a broad class of alternatives over which the regression response varies. To this end, the problem is phrased as one of discriminating between the preferred model and the class of alternatives. This, in turn, is a hypothesis testing problem, for which the asymptotic power of the test statistic is directly related to the Kullback-Leibler divergence between the models, averaged over the design. ‘Maximin’ designs, which maximize (through the design) the minimum (among the class of alternative models) value of this power together with a measure of the efficiency of the parameter estimates, are also constructed. Several examples are presented in detail; two of these relate to a medical study of fluoxetine versus a placebo in depression patients. The method of design construction is computationally intensive, and involves a steepest descent minimization routine coupled with simulated annealing.
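For binary responses the Kullback-Leibler divergence referred to above has a simple closed form. The expression below is a generic Bernoulli divergence averaged over a discrete design ξ, written only to fix notation (p is the success probability under the preferred model, q under an alternative); the paper's exact criterion may differ.

    % Generic Bernoulli KL divergence averaged over a design \xi (notation assumed, not quoted from the paper)
    KL(\xi) \;=\; \sum_{x \in \operatorname{supp}(\xi)} \xi(x)
      \left[ p(x)\,\ln\frac{p(x)}{q(x)} \;+\; \bigl(1 - p(x)\bigr)\,\ln\frac{1 - p(x)}{1 - q(x)} \right]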
66.
Research using Internet surveys is an emerging field, yet research on the legitimacy of using Internet studies, particularly those targeting sensitive topics, remains under-investigated. The current study builds on the existing literature by exploring the demographic differences between Internet panel and RDD telephone survey samples, as well as differences in responses with regard to experiences of intimate partner violence perpetration and victimization, alcohol and substance use/abuse, PTSD symptomatology, and social support. Analyses indicated that after controlling for demographic differences, there were few differences between the samples in their disclosure of sensitive information, and that the online sample was more socially isolated than the phone sample. Results are discussed in terms of their implications for using Internet samples in research on sensitive topics.
67.
To be relevant to the goals of an enterprise, an industrial software engineering research organization must identify problems of interest to, and find solutions that have an impact on, the software development organizations within the company. Using a systematic measurement program both to identify the problems and assess the impact of solutions is key to satisfying this need. Avaya has had such a program in place for about seven years. Every year we produce an annual report known as the State of Software in Avaya that describes software development trends throughout the company and that contains prioritized recommendations for improving Avaya’s software development capabilities. We start by identifying the goals of the enterprise and use the goal-question-metric approach to identify the measures to compute. The result is insight into the enterprise’s problems in software development, recommendations for improving the development process, and problems that require research to solve. We will illustrate the process with examples from the Software Technology Research Department in Avaya Labs, whose purpose is to improve the state of software development and know it. “Know it” means that improvement should be subjectively evident and objectively quantifiable. “Know it” also means that one must be skilled at identifying the data sources, performing the appropriate analyses to answer the questions of interest, and validating that the data are accurate and appropriate for the purpose. Examples will include how and why we developed a measure of software quality that appeals to customers, how and why we are studying the effectiveness of distributed software development, and how and why we are helping development organizations to adopt iterative development methods. We will also discuss how we keep the company and the department apprised of the current strengths and weaknesses of software development in Avaya through the publication of the annual State of Software in Avaya Report. Our purpose is both to provide a model for assessment that others may emulate, based on seven years of experience, and to spotlight analyses and conclusions that we feel are common to software development today.
68.
The detailed implementation and analysis of a finite element multigrid scheme for the solution of elliptic optimal control problems is presented. A particular focus is the definition of smoothing strategies for the case of constrained control problems. For this setting, convergence of the multigrid scheme is discussed based on the BPX framework. Results of numerical experiments are reported to illustrate and validate the optimal efficiency and robustness of the present multigrid strategy.
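As background for readers unfamiliar with multigrid, the sketch below is a bare-bones geometric V-cycle for the 1D model problem -u'' = f with homogeneous Dirichlet boundaries, using weighted-Jacobi smoothing, full-weighting restriction, and linear-interpolation prolongation. It only illustrates the generic smoothing/coarse-correction structure and does not implement the finite element, constrained-control, or BPX-based machinery of the paper.

    import numpy as np

    def v_cycle(u, f, h, nu=3, omega=0.5):
        # One V-cycle of geometric multigrid for -u'' = f on [0, 1] with u(0) = u(1) = 0,
        # discretized by central differences on a uniform grid with spacing h.
        n = u.size - 1                                  # number of intervals (a power of two)
        if n <= 2:                                      # coarsest level: single interior unknown
            u[1] = 0.5 * h * h * f[1]
            return u
        for _ in range(nu):                             # pre-smoothing: weighted Jacobi sweeps
            u[1:-1] += omega * 0.5 * (u[:-2] + u[2:] + h * h * f[1:-1] - 2.0 * u[1:-1])
        r = np.zeros_like(u)                            # residual r = f - A u
        r[1:-1] = f[1:-1] + (u[:-2] - 2.0 * u[1:-1] + u[2:]) / (h * h)
        rc = np.zeros(n // 2 + 1)                       # full-weighting restriction to the coarse grid
        rc[1:-1] = 0.25 * r[1:-2:2] + 0.5 * r[2:-1:2] + 0.25 * r[3::2]
        ec = v_cycle(np.zeros(n // 2 + 1), rc, 2.0 * h, nu, omega)
        e = np.zeros_like(u)                            # linear-interpolation prolongation
        e[::2] = ec
        e[1:-1:2] = 0.5 * (ec[:-1] + ec[1:])
        u += e                                          # coarse-grid correction
        for _ in range(nu):                             # post-smoothing
            u[1:-1] += omega * 0.5 * (u[:-2] + u[2:] + h * h * f[1:-1] - 2.0 * u[1:-1])
        return u

    # Usage: a few V-cycles drive the error for the exact solution sin(pi x) down quickly.
    n = 128
    x = np.linspace(0.0, 1.0, n + 1)
    f = np.pi ** 2 * np.sin(np.pi * x)
    u = np.zeros(n + 1)
    for _ in range(8):
        u = v_cycle(u, f, 1.0 / n)
    print("max error:", np.abs(u - np.sin(np.pi * x)).max())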
69.
Bryophytes are the dominant ground-cover vegetation layer in many boreal forests, and in some of these forests the net primary production of bryophytes exceeds that of the overstory. Therefore, it is necessary to quantify their spatial coverage and species composition in boreal forests to improve boreal forest carbon budget estimates. We present results from a small exploratory test using airborne lidar and multispectral remote sensing data to estimate the percentage of ground cover for mosses in a boreal black spruce forest in Manitoba, Canada. Multiple linear regression was used to fit models that combined spectral reflectance data from CASI and indices computed from the SLICER canopy height profile. Three models explained 63-79% of the measured variation of feathermoss cover, while three models explained 69-92% of the measured variation of sphagnum cover. Root mean square errors ranged from 3% to 15% when predicting feathermoss, sphagnum, and total moss ground cover. The results from this case study warrant further testing for a wider range of boreal forest types and geographic regions.
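The fitting step described above is ordinary multiple linear regression on combined spectral and lidar-derived predictors. The sketch below shows that generic step (least squares plus R² and RMSE) on synthetic data; the predictor values and names are invented placeholders, not the study's CASI or SLICER measurements.

    import numpy as np

    rng = np.random.default_rng(1)
    # Hypothetical predictors: a few spectral-reflectance bands and canopy-height-profile
    # indices, one row per field plot (synthetic stand-ins for the CASI/SLICER variables).
    X = rng.random((40, 4))
    y = 30.0 + X @ np.array([10.0, 25.0, -15.0, 5.0]) + rng.normal(0.0, 3.0, 40)   # % moss cover

    A = np.column_stack([np.ones(len(y)), X])          # design matrix with intercept
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)       # ordinary least squares fit
    pred = A @ coef
    r2 = 1.0 - np.sum((y - pred) ** 2) / np.sum((y - y.mean()) ** 2)
    rmse = np.sqrt(np.mean((y - pred) ** 2))
    print(f"R^2 = {r2:.2f}, RMSE = {rmse:.1f} % cover")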
70.
This paper presents a novel method to detect free surfaces in particle-based volume representations. In contrast to most particle-based free-surface detection methods, which perform the surface identification based on physical and geometrical properties derived from the underlying fluid flow simulation, the proposed approach only demands the spatial location of the particles to properly recognize surface particles, avoiding even the use of kernels. Boundary particles are identified through a Hidden Point Removal (HPR) operator used for visibility testing. Our method is very simple, fast, easy to implement and robust to changes in the distribution of particles, even when facing large deformation of the free surface. A set of comparisons against state-of-the-art boundary detection methods shows the effectiveness of our approach. The good performance of our method is also attested in the context of fluid flow simulation involving free surfaces, mainly when using level-sets for rendering purposes.
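The HPR operator mentioned above needs only particle positions. The sketch below is a standard formulation of that operator (spherical flip followed by a convex-hull visibility test) written against SciPy; the flip-radius factor and the usage scenario are illustrative assumptions, and the paper's exact variant may differ.

    import numpy as np
    from scipy.spatial import ConvexHull

    def hpr_visible(points, viewpoint, radius_factor=100.0):
        # Hidden Point Removal: points whose spherical flip lies on the convex hull
        # of the flipped cloud plus the viewpoint are reported as visible.
        p = points - viewpoint                                   # move the viewpoint to the origin
        d = np.maximum(np.linalg.norm(p, axis=1, keepdims=True), 1e-12)
        R = radius_factor * d.max()                              # radius of the flip sphere (assumed heuristic)
        flipped = p + 2.0 * (R - d) * (p / d)                    # spherical flip of every point
        hull = ConvexHull(np.vstack([flipped, np.zeros((1, points.shape[1]))]))
        visible = set(hull.vertices.tolist())
        visible.discard(len(points))                             # drop the viewpoint vertex itself
        return np.array(sorted(visible))

    # Usage: from a viewpoint outside a cloud of points on the unit sphere, roughly the
    # near-facing cap should be reported as visible (i.e. boundary) particles.
    rng = np.random.default_rng(2)
    pts = rng.normal(size=(2000, 3))
    pts /= np.linalg.norm(pts, axis=1, keepdims=True)
    idx = hpr_visible(pts, viewpoint=np.array([0.0, 0.0, 5.0]))
    print(len(idx), "of", len(pts), "points visible")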