11.
The estimation of differences among groups in observational studies is frequently inaccurate owing to bias caused by differences in the distributions of covariates. To estimate average treatment effects when the treatment variable is binary, Rosenbaum and Rubin [1983. The central role of the propensity score in observational studies for causal effects. Biometrika 70, 41-55] proposed an adjustment method for pre-treatment variables using propensity scores. Imbens [2000. The role of the propensity score in estimating dose-response functions. Biometrika 87, 706-710] extended the propensity score methodology to the estimation of average treatment effects with multivalued treatments. However, these studies focused only on estimating the marginal mean structure. In many substantive sciences, such as the biological and social sciences, a general estimation method is required to handle analyses more complex than regression, such as testing group differences on latent variables. For latent variable models, the EM algorithm or traditional Monte Carlo methods are necessary; in propensity score adjustment, however, these methods cannot be used because the full distribution is not specified. In this paper, we propose a quasi-Bayesian estimation method for general parametric models that integrates out the distributions of covariates using propensity scores. The proposed Bayes estimates are shown to be consistent and can be calculated by existing Markov chain Monte Carlo methods such as the Gibbs sampler. The proposed method is useful for estimating parameters in latent variable models, for which previous methods were unable to provide valid estimates. We also illustrate the procedure using data from the US National Longitudinal Survey of Children and Youth (NLSY, 1979-2002) to estimate the effect of maternal smoking during pregnancy on the development of the child's cognitive functioning.
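As a rough illustration of the propensity-score idea this abstract builds on (not the quasi-Bayesian procedure itself), the sketch below fits generalized propensity scores for a three-level treatment with a multinomial logistic model and uses them as inverse-probability weights to adjust group means; the covariates, treatment-assignment mechanism and effect sizes are all synthetic assumptions.

```python
# Minimal sketch (not the paper's quasi-Bayesian method): generalized propensity
# scores for a multivalued treatment, used as inverse-probability weights to
# adjust group means for covariate imbalance. All data are synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000
x = rng.normal(size=(n, 2))                            # pre-treatment covariates
# treatment assignment depends on the covariates (three treatment levels)
logits = np.column_stack([np.zeros(n), 0.8 * x[:, 0], 0.8 * x[:, 1]])
p = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)
t = np.array([rng.choice(3, p=pi) for pi in p])
y = 1.0 * t + x[:, 0] + 0.5 * x[:, 1] + rng.normal(size=n)   # true effect: +1.0 per level

# 1) estimate generalized propensity scores e_k(x) = P(T = k | X = x)
ps_model = LogisticRegression(max_iter=1000).fit(x, t)
e = ps_model.predict_proba(x)                          # n x 3 matrix of scores

# 2) inverse-probability-weighted mean outcome per treatment level
for k in range(3):
    w = (t == k) / e[:, k]
    naive = y[t == k].mean()
    adjusted = np.sum(w * y) / np.sum(w)
    print(f"treatment {k}: naive mean {naive:.2f}, IPW-adjusted mean {adjusted:.2f}")
```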
12.
The discrimination problem for two normal populations with the same covariance matrix is considered when additional information on the populations is available. The robustness against training-sample contamination of classification rules that incorporate this additional information is studied. These rules have recently received attention because their total misclassification probability (TMP) has been proved to be lower than that of Fisher's linear discriminant rule. The results of a simulation study on the TMP are presented, comparing the behaviour of the new rules against Fisher's rule and some of its robustified versions under different types of contamination. These results show that the rules incorporating the additional information not only have lower TMP but also protect against some types of contamination. To achieve protection from all types of contamination, a robustified version of these rules is recommended.
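A minimal sketch of the kind of Monte Carlo comparison described here: estimating the TMP of Fisher's linear rule when part of the training sample is contaminated. The population means, contamination shift and sample sizes below are arbitrary illustrative choices, and the paper's information-incorporating rules are not implemented.

```python
# Estimate the total misclassification probability (TMP) of Fisher's linear rule
# when a fraction eps of one training group is shifted by an outlying offset.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(1)
mu0, mu1 = np.array([0.0, 0.0]), np.array([2.0, 0.0])    # common identity covariance

def sample(mu, n):
    return rng.normal(size=(n, 2)) + mu

def tmp_of_rule(eps, n_train=200, n_test=20000):
    """Train Fisher's rule on contaminated data, return TMP on clean test data."""
    x0, x1 = sample(mu0, n_train), sample(mu1, n_train)
    n_bad = int(eps * n_train)
    x0[:n_bad] += np.array([10.0, 10.0])                  # contaminate group 0
    x_tr = np.vstack([x0, x1])
    y_tr = np.r_[np.zeros(n_train), np.ones(n_train)]
    rule = LinearDiscriminantAnalysis().fit(x_tr, y_tr)
    x_te = np.vstack([sample(mu0, n_test), sample(mu1, n_test)])
    y_te = np.r_[np.zeros(n_test), np.ones(n_test)]
    return np.mean(rule.predict(x_te) != y_te)

for eps in (0.0, 0.05, 0.10):
    print(f"contamination {eps:.0%}: TMP = {tmp_of_rule(eps):.3f}")
```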
13.
闫荣春  许罕多 《山西建筑》2008,34(11):241-242
对"比较法"原理和应用情况进行了分析,说明其在房地产开发中能够科学地预测房地产的价格和收益,从而提高财务分析的科学性和权威性,为项目决策提供科学依据,以促进"比较法"在房地产开发中的应用。  相似文献   
14.
Hydroprocessing catalysts based on Ni, Co, Mo and W are used in various refinery processing applications in which several deactivation mechanisms (coke formation, active-phase sintering, metals deposition, poisoning) become important over the catalyst's life cycle. The life cycle of commercial hydroprocessing catalysts is very complex and includes catalyst production, sulfidation, use, oxidative regeneration followed by re-sulfidation and reuse or, if reuse is not possible, recycling or disposal. To understand the changes in catalyst properties taking place during a life cycle, the catalyst quality at the different stages is best monitored using advanced analytical techniques. The life cycle is further complicated by the numerous technical, environmental and organizational issues involved; in principle, different companies can be involved in each of the life cycle steps. Leading catalyst manufacturers, together with specialized firms, offer refineries a total catalyst management concept, starting with the purchase of the fresh catalyst and ending with its final recycling or disposal. Total catalyst management includes a broad range of services, ensuring optimal timing of the change-out process, reliable, smooth and safe operations, minimal downtime and maximum catalyst and unit performance.
15.
Biodiesel has become an attractive diesel fuel substitute because of its environmental benefits and because it can be made from renewable resources. However, the high cost of biodiesel production remains the main obstacle to making it competitive in the fuel market, either as a blend or as a neat fuel. More than 80% of the production cost is associated with the feedstock itself; consequently, efforts are focused on developing technologies capable of using lower-cost feedstocks, such as recycled cooking oils and wastes from animal or vegetable oil processing operations.
16.
Industrial pelletizing of sawdust was carried out as a designed experiment in the factors sawdust moisture content and the fractions of fresh pine, stored pine and spruce. The process parameters and response variables were energy consumption, pellet flow rate, pellet bulk density, durability and moisture content. The final data consisted of twelve industrial-scale runs. Because of the many response variables, the data were evaluated by principal component analysis of a 12 × 9 data matrix. The two-component model showed a clustering of samples, with good reproducibility of the center points. It also showed a positive correlation among energy consumption, bulk density and durability, all of which were negatively correlated with flow rate and moisture content. Stored pine was more associated with high durability and bulk density; the role of the spruce fraction was unclear. The design matrix, augmented with the process parameters, formed a 12 × 6 matrix. Partial least squares regression gave excellent results for pellet moisture content and bulk density, and the model for durability was promising. A 12 × 21 data matrix of fatty- and resin-acid concentrations measured by GC–MS showed the differences between fresh and stored pine very clearly, while the influence of the spruce fraction was less clear. However, the influence of the fatty and resin acids on the pelletizing process could not be confirmed, indicating that other differences between fresh and stored pine sawdust have to be investigated. This work shows that it is possible to design the pelletizing process for moderate energy consumption and high pellet quality.
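A compact sketch of the chemometric workflow named above (PCA on the autoscaled response matrix, then PLS regression from the design factors to the responses). The matrices here are random placeholders with the stated dimensions; they are not the twelve industrial runs.

```python
# PCA of a 12 x 9 response matrix and PLS from a 12 x 6 design/process matrix,
# mirroring the analysis described in the abstract. Placeholder data only.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cross_decomposition import PLSRegression
from sklearn.preprocessing import scale

rng = np.random.default_rng(2)
X_design = rng.normal(size=(12, 6))     # 12 runs x 6 design factors and process parameters
Y_resp = rng.normal(size=(12, 9))       # 12 runs x 9 response variables

# PCA of the autoscaled response matrix: the score plot reveals clustering of runs
pca = PCA(n_components=2).fit(scale(Y_resp))
scores = pca.transform(scale(Y_resp))
print("explained variance ratio:", pca.explained_variance_ratio_)

# PLS regression from design factors to responses (e.g. moisture content, bulk density)
pls = PLSRegression(n_components=2).fit(scale(X_design), scale(Y_resp))
print("R^2 of PLS fit:", pls.score(scale(X_design), scale(Y_resp)))
```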
17.
Variation exists in all processes. Significant work has been done to identify and remove sources of variation in manufacturing processes, resulting in large returns for companies. Business process optimization, however, is an area with a similarly large potential return. Business processes can be difficult to optimize because of the nature of their output variables, which tend to be binary, nominal or ordinal. Examples include whether a particular event occurred, a customer's color preference for a new product, and survey questions that assess the extent of a respondent's agreement with a particular statement. Output variables that are binary, nominal or ordinal cannot be modeled using ordinary least-squares regression. Logistic regression is a method for modeling data whose output is binary, nominal or ordinal. This article reviews logistic regression and demonstrates its use in modeling data from a business process involving customer feedback. Copyright © 2006 John Wiley & Sons, Ltd.
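A minimal sketch of the modeling approach the article reviews: a logistic regression fitted to a binary process output. The data are synthetic stand-ins for the customer-feedback example, and the predictors (handling time, follow-up call) are hypothetical.

```python
# Logistic regression for a binary business-process output (e.g. whether a
# customer responded favourably). Synthetic data; hypothetical predictors.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(3)
n = 500
wait_time = rng.uniform(1, 30, n)                   # predictor: handling time (minutes)
followup = rng.integers(0, 2, n)                    # predictor: follow-up call made (0/1)
logit_p = 1.5 - 0.12 * wait_time + 0.9 * followup
y = rng.binomial(1, 1 / (1 + np.exp(-logit_p)))     # binary outcome: satisfied or not

X = sm.add_constant(np.column_stack([wait_time, followup]))
model = sm.Logit(y, X).fit(disp=0)
print(model.summary())                              # coefficients, standard errors, p-values
print("odds ratios:", np.exp(model.params))         # effect sizes on the odds scale
```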
18.
This paper deals with the decomposition analysis of energy-related CO2 emissions in Greece from 1990 to 2002. The Arithmetic Mean Divisia Index (AMDI) and the Logarithmic Mean Divisia Index (LMDI) techniques are applied, and changes in CO2 emissions are decomposed into four factors: an income effect, an energy intensity effect, a fuel share effect and a population effect. The period-wise and time-series analyses show that the biggest contributor to the rise in CO2 emissions in Greece is the income effect; the energy intensity effect, by contrast, is mainly responsible for decreases in CO2 emissions. A comparison of the results of the two techniques gives insight into the intricacies of energy decomposition. Finally, conclusions and future areas of research are presented.
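A small sketch of an additive LMDI-I decomposition of an emissions change into the four effects listed above, via a Kaya-type identity C = P · (G/P) · (E/G) · (C/E). The figures are invented for illustration; the actual study works fuel by fuel over 1990-2002.

```python
# Additive LMDI-I decomposition of the change in CO2 emissions between two years
# into population, income, energy-intensity and carbon-factor effects.
import math

def log_mean(a, b):
    return a if a == b else (a - b) / (math.log(a) - math.log(b))

def lmdi(year0, yearT):
    P0, G0, E0, C0 = year0      # population, GDP, energy use, CO2 emissions
    PT, GT, ET, CT = yearT
    w = log_mean(CT, C0)        # logarithmic-mean weight
    effects = {
        "population":       w * math.log(PT / P0),
        "income":           w * math.log((GT / PT) / (G0 / P0)),
        "energy intensity": w * math.log((ET / GT) / (E0 / G0)),
        "carbon factor":    w * math.log((CT / ET) / (C0 / E0)),
    }
    effects["total change"] = CT - C0   # the four effects sum exactly to this
    return effects

base = (10.2, 100.0, 25.0, 70.0)        # hypothetical base-year values
final = (11.0, 140.0, 30.0, 90.0)       # hypothetical end-year values
for name, value in lmdi(base, final).items():
    print(f"{name:>16}: {value:+.2f}")
```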
19.
Many approaches have been proposed to enhance software productivity and reliability. They typically fall into three categories: the engineering approach, the formal approach and the knowledge-based approach. However, the optimal gain in software productivity cannot be obtained by relying on only one of these approaches. This paper describes the authors' work in knowledge-based software engineering over the past 10 years. The final goal of the research is to develop a paradigm for software engineering that integrates the three approaches mentioned above. A knowledge-based tool that can support the whole process of software development is also presented.
20.
This paper describes an effective analysis of magnetic shielding based on homogenization. Such analyses become time-consuming when the problem includes magnetic materials with fine structure; homogenizing the structure makes it possible to analyze the magnetic fields efficiently. The authors introduce a method to estimate the effective permeability of the homogenized material, which can be applied to any periodic structure made of magnetic material. The magnetic shielding provided by such structures against direct-current (DC) fields generated by electric railways is analyzed using the present method. It is found that the overhead way and the protective fence near the railway act as a magnetic shield, whose effect can be improved by appropriate arrangement of these constructions. © 2007 Wiley Periodicals, Inc. Electr Eng Jpn, 160(4): 7-15, 2007; Published online in Wiley InterScience (www.interscience.wiley.com). DOI 10.1002/eej.20310
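Not the authors' homogenization scheme, but as a rough point of reference the classical Wiener (series/parallel) bounds give quick estimates of the effective relative permeability of a two-phase periodic structure; the permeability value and fill factors below are assumed for illustration.

```python
# Wiener bounds on the effective relative permeability of a two-phase periodic
# structure (magnetic material with volume fraction f in air). Assumed values only.
def wiener_bounds(mu_r, f):
    """Lower/upper bounds for volume fraction f of material with relative
    permeability mu_r embedded in air (mu_r = 1)."""
    mu_parallel = f * mu_r + (1 - f) * 1.0            # flux parallel to layers (upper bound)
    mu_series = 1.0 / (f / mu_r + (1 - f) / 1.0)      # flux normal to layers (lower bound)
    return mu_series, mu_parallel

for f in (0.05, 0.10, 0.20):
    lo, hi = wiener_bounds(mu_r=1000.0, f=f)
    print(f"fill factor {f:.0%}: effective mu_r between {lo:.2f} and {hi:.1f}")
```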