711 query results in total; entries 51–60 are shown below.
51.
Poly(m-aminophenol) (PmAP) was synthesized by the oxidative polymerization of m-aminophenol in sodium hydroxide medium, using ammonium persulfate as the oxidant, at room temperature. The synthesized polymer showed very good solution processability, dissolving readily in aqueous sodium hydroxide, dimethyl sulfoxide (DMSO), dimethylformamide (DMF), etc. A free-standing film was cast by thermal evaporation of a DMSO solution of the synthesized PmAP. The film was then doped with an aqueous sodium hydroxide and methanol mixture by a solution doping technique at room temperature, and the doping conditions were standardized in terms of the DC conductivity of the doped film. The doped PmAP was characterized by ultraviolet–visible spectroscopy, Fourier transform infrared spectroscopy, energy-dispersive spectroscopy, X-ray diffraction, elemental analysis by atomic absorption spectroscopy, thermogravimetric analysis, and DC electrical conductivity measurements. The DC electrical conductivity of the PmAP film increased from <10⁻¹² S/cm to 2.34 × 10⁻⁵ S/cm upon sodium-ion doping. The characterizations confirmed that the sodium ions themselves were not responsible for the conduction; rather, the sodium cations incorporated into the polymer through its free –OH groups perturbed the electron cloud of the polymer chain, rendering the polymer conducting.
52.
ZnTe quantum dots (QDs) were synthesized at room temperature in a single step by mechanically alloying a stoichiometric equimolar (1:1 mol) mixture of Zn and Te powders under Ar within 1 h of milling. Both XRD and HRTEM characterizations reveal that these QDs, with sizes of ∼5 nm, contain stacking faults of different kinds. A distinct blue-shift in the absorption spectra with decreasing particle size confirms the quantum size confinement effect (QSCE). It is observed for the first time that QDs containing a considerable number of faults can still show the QSCE. The optical band gaps of these QDs increase with increasing milling time, so the band gap can be fine-tuned simply by varying the milling time.
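The size-dependent blue-shift reported here is conventionally related to particle radius through the Brus effective-mass model. The Python sketch below is illustrative only: it is not the analysis used in the paper, and the ZnTe parameters (bulk gap, effective masses, dielectric constant) are assumed literature-order values, not numbers taken from this work.

```python
import numpy as np

# Physical constants (SI)
HBAR = 1.054571817e-34   # J*s
M0   = 9.1093837015e-31  # electron rest mass, kg
E    = 1.602176634e-19   # elementary charge, C
EPS0 = 8.8541878128e-12  # vacuum permittivity, F/m

# Illustrative ZnTe parameters (ASSUMED, not from the paper)
EG_BULK = 2.26        # bulk band gap, eV
ME, MH  = 0.20, 0.60  # electron / hole effective masses in units of m0
EPS_R   = 7.4         # relative dielectric constant

def brus_gap_eV(radius_nm):
    """Size-dependent band gap from the Brus effective-mass model."""
    r = radius_nm * 1e-9
    confinement = (HBAR**2 * np.pi**2) / (2 * r**2) * (1/(ME*M0) + 1/(MH*M0))
    coulomb = 1.8 * E**2 / (4 * np.pi * EPS0 * EPS_R * r)
    return EG_BULK + (confinement - coulomb) / E

for d in (4.0, 5.0, 6.0):  # QD diameters in nm
    print(f"d = {d} nm -> Eg ~ {brus_gap_eV(d / 2):.2f} eV")
```

For a ∼5 nm dot these assumed parameters give a gap near 2.5 eV, i.e. a few tenths of an eV above the bulk value, which is the qualitative trend (widening gap with shorter milling time and smaller size) the abstract describes.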
53.
X and gamma rays remain the main contributors to the dose to humans. As photons of varying energies are encountered in many applications, studying the photon energy response of a dosemeter is essential for ensuring accuracy in dose measurement. The response of a dosemeter has to be established experimentally because, for luminescence dosemeters, it depends not only on the effective atomic number of the detector (through the ratio of the mass energy-absorption coefficients of the dosemeter and tissue) but also, considerably, on the luminescence efficiency and on the material surrounding the dosemeter. Metal filters are generally used to compensate for energy dependence below 200 keV and/or to provide photon energy discrimination. It is noted that the contribution to Hp(0.07) can be measured more accurately than Hp(10). For dosemeters exhibiting a strongly photon-energy-dependent response, estimating the beta component of Hp(0.07) becomes very difficult in a mixed field of beta radiation and photons of energy less than 100 keV. Recent studies have shown that the thickness and atomic number of metal filters not only affect the response below 200 keV but also cause a significant over-response for high-energy (>6 MeV) photons, often encountered in the environments of pressurised heavy water reactors and accelerators.
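Since the intrinsic part of the energy response is defined above through the ratio of mass energy-absorption coefficients of detector and tissue, that calculation can be sketched in a few lines. The coefficient arrays below are placeholders for illustration only; real (μ_en/ρ) values should be taken from standard tabulations such as the NIST data.

```python
import numpy as np

ENERGIES_KEV = np.array([20.0, 30.0, 50.0, 80.0, 100.0, 200.0, 662.0, 1250.0])

# PLACEHOLDER (mu_en/rho) values in cm^2/g, for illustration only --
# substitute tabulated values for the actual detector material and tissue.
MU_EN_DET    = np.array([4.0, 1.3, 0.40, 0.15, 0.09, 0.030, 0.029, 0.026])
MU_EN_TISSUE = np.array([0.54, 0.15, 0.041, 0.026, 0.025, 0.029, 0.032, 0.029])

def relative_energy_response(e_keV, ref_keV=662.0):
    """Ratio (mu_en/rho)_det / (mu_en/rho)_tissue, normalised at ref_keV.
    Interpolated log-log, as attenuation data are usually tabulated."""
    loge  = np.log(ENERGIES_KEV)
    ratio = MU_EN_DET / MU_EN_TISSUE
    r  = np.exp(np.interp(np.log(e_keV), loge, np.log(ratio)))
    r0 = np.exp(np.interp(np.log(ref_keV), loge, np.log(ratio)))
    return r / r0

for e in (30, 80, 662):
    print(f"{e:>4} keV: relative response ~ {relative_energy_response(e):.2f}")
```

This captures only the absorption-coefficient part of the response; as the abstract stresses, luminescence efficiency and the surrounding material must still be accounted for experimentally.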
54.
55.
Time of concentration (Tc) is the time required for runoff to travel from the hydraulically most distant point of a watershed to its outlet. The Natural Resources Conservation Service (NRCS) velocity method is commonly used to estimate Tc for hydrologic analysis and design; it applies the physical concept that travel time is a function of runoff flow length and flow velocity. Time of concentration for 96 Texas watersheds was independently estimated by three research teams using the NRCS velocity method. Drainage areas of the 96 watersheds range from approximately 0.8 to 440.3 km² (0.3 to 170 mi²). Digital elevation models with a 30 m grid size were used to derive watershed physical characteristics using ArcGIS or HEC-GeoHMS, and average channel width was estimated from 1 m or 1 ft digital orthoimagery quarter quadrangles or aerial photography. Each team made independent decisions in estimating the parameters needed for the different flow segments of the NRCS velocity method. The three teams' Tc estimates are compared, and both graphical comparison and statistical summary demonstrate that time of concentration estimated using the NRCS velocity method is subject to large variation, depending on the analyst-derived parameters used to estimate flow velocity. Because different analysts are prone to arrive at different results, caution is required when applying the NRCS velocity method to estimate Tc.
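A minimal sketch of the velocity method's arithmetic, assuming the published TR-55 segment relations in US customary units (sheet flow, shallow concentrated flow, and Manning channel flow): Tc is the sum of L/V travel times over the segments. All watershed parameters below are hypothetical, not values from the 96 Texas watersheds.

```python
# NRCS (TR-55) velocity method: Tc = sum of segment travel times.
# US customary units; coefficients follow the published TR-55 relations.

def sheet_flow_hr(L_ft, n, slope, P2_in):
    """Sheet-flow travel time (hr); TR-55 limits this segment's length.
    P2_in is the 2-year, 24-hour rainfall depth in inches."""
    return 0.007 * (n * L_ft) ** 0.8 / (P2_in ** 0.5 * slope ** 0.4)

def shallow_concentrated_hr(L_ft, slope, paved=False):
    """Shallow concentrated flow: TR-55 velocity relation V = k*sqrt(s)."""
    k = 20.3282 if paved else 16.1345   # ft/s
    return L_ft / (3600.0 * k * slope ** 0.5)

def channel_hr(L_ft, n, slope, hyd_radius_ft):
    """Open-channel travel time using Manning's equation (US units)."""
    v = (1.49 / n) * hyd_radius_ft ** (2.0 / 3.0) * slope ** 0.5
    return L_ft / (3600.0 * v)

# Hypothetical watershed: lengths, slopes, and n-values are illustrative.
tc = (sheet_flow_hr(100, 0.24, 0.01, 3.5)       # 100 ft of dense grass
      + shallow_concentrated_hr(1500, 0.008)    # unpaved swale
      + channel_hr(8000, 0.035, 0.004, 2.0))    # main channel
print(f"Tc ~ {tc:.2f} hr")
```

The analyst-sensitivity the study documents is visible even in this toy: the Manning n, hydraulic radius, and segment break points are all analyst-derived inputs, and each feeds directly into the computed velocity and hence Tc.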
56.
A three-tiered, enterprise geographic information system architecture offers a robust, efficient, and secure platform that could revolutionize disaster management by supporting all of the phases of governmental activity that must occur before, during, and after a disaster. At present, computer-based disaster management systems, whether publicly or privately initiated, cannot meet real-time data access and analysis needs at crucial stages, especially during an actual disaster; these impediments reflect the proprietary, stand-alone, and segregated nature of current systems. This paper proposes an integrated infrastructure management information system as a reliable and effective alternative. Issues related to sharing data, customizing applications, supporting multiple data formats, querying visually, facilitating ubiquitous computing, and upgrading are all addressed. Achieving maximum flexibility and capacity in a disaster management system relies on recent advances in three areas: (1) standardized data specifications; (2) middleware services; and (3) Web-enabled, distributed computing. Key resources for designing and implementing such an arrangement are prototyped in a system initially built for disaster management of urban explosions, the critical details of which are presented herein.
57.
AgInSe2 (AIS) films were grown on n-type Si substrates by the ultra-high-vacuum pulsed laser deposition technique from an AIS target synthesized from high-purity materials. X-ray diffraction and microscopic studies show that the films are textured with a terrace-like surface morphology, and optical studies give an optical band gap of about 1.24 eV. The current–voltage behavior of the AgInSe2/Si junction shows excellent diode characteristics. Under illumination, the AgInSe2/Si device shows a photocurrent of 2.8 mA at a bias voltage of −1 V and an open-circuit voltage of 0.15 V. This shows that AIS films are a very good absorber material for solar cell technology.
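The reported photocurrent and open-circuit voltage fit naturally into a single-diode picture. The sketch below assumes the textbook Shockley model with a photogenerated current term; the saturation current and ideality factor are hypothetical values chosen only so the computed Voc lands near the reported 0.15 V, not parameters extracted in the paper.

```python
import numpy as np

Q_KT = 1.0 / 0.02585          # q/kT at 300 K, in 1/V

# HYPOTHETICAL parameters; only i_ph (2.8 mA) and Voc (0.15 V) are reported.
I0, N, I_PH = 8.5e-6, 1.0, 2.8e-3

def diode_current(v):
    """Shockley diode with photogenerated current (A):
    I = I0*(exp(qV/nkT) - 1) - I_ph."""
    return I0 * (np.exp(Q_KT * v / N) - 1.0) - I_PH

def open_circuit_voltage():
    """V where the net current is zero: Voc = (n kT/q) * ln(1 + I_ph/I0)."""
    return (N / Q_KT) * np.log(1.0 + I_PH / I0)

print(f"Voc    ~ {open_circuit_voltage():.2f} V")          # ~0.15 V
print(f"I(-1V) ~ {abs(diode_current(-1.0)) * 1e3:.2f} mA") # ~2.8 mA
```

In reverse bias the exponential term vanishes, so the measured −1 V current is essentially the photocurrent itself, consistent with the reported 2.8 mA.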
58.
This paper addresses a fundamental problem in performance prediction for object recognition: estimating the size of a small gallery that can generate the optimal error estimate, and its confidence, on a population large relative to the gallery. It uses a generalized two-dimensional prediction model that combines a hypergeometric probability distribution model with a binomial model, and it also accounts for data distortion in large populations. Learning is incorporated into the prediction process to find the optimal small gallery size and to improve the prediction. The Chernoff and Chebyshev inequalities are used as a guide to obtain the small gallery size. During prediction, the expectation–maximization (EM) algorithm is used to learn the match-score and non-match-score distributions, each represented as a mixture of Gaussians. The optimal size of the small gallery is learned by comparing it with the sizes obtained by the statistical approaches, and at the same time upper and lower bounds for the prediction on large populations are obtained. Prediction results are presented for the NIST-4 fingerprint database.
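Two ingredients of this approach, EM-fitted Gaussian mixtures for the score distributions and a Chebyshev-style bound guiding sample size, can be sketched in a few lines of Python. This is a toy illustration with synthetic scores, not the paper's full two-dimensional prediction model or the NIST-4 data.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)

# Synthetic match / non-match similarity scores (ILLUSTRATIVE only).
match_scores    = np.concatenate([rng.normal(0.75, 0.05, 400),
                                  rng.normal(0.60, 0.08, 200)])
nonmatch_scores = rng.normal(0.30, 0.10, 1000)

# EM learns each score distribution as a mixture of Gaussians,
# mirroring the representation described in the abstract.
gm_match    = GaussianMixture(n_components=2, random_state=0)
gm_match.fit(match_scores.reshape(-1, 1))
gm_nonmatch = GaussianMixture(n_components=2, random_state=0)
gm_nonmatch.fit(nonmatch_scores.reshape(-1, 1))

def chebyshev_gallery_size(sigma2, eps, delta):
    """Smallest n with P(|error_est - error| >= eps) <= delta via
    Chebyshev's inequality: n >= sigma^2 / (delta * eps^2)."""
    return int(np.ceil(sigma2 / (delta * eps * eps)))

print("match-score mixture means:", gm_match.means_.ravel().round(2))
print("gallery size >=", chebyshev_gallery_size(sigma2=0.25, eps=0.05, delta=0.05))
```

Chebyshev makes no distributional assumption, so the bound it gives is loose; the Chernoff bound, when applicable, yields a smaller gallery size, which is why the paper uses both as guides rather than final answers.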
59.
Empirical studies in software engineering can involve a variety of organizations, each with its own set of policies and procedures geared toward safeguarding the interests and responsibilities of the researchers, students, the collaborating company, the university, and possibly national funding agencies such as the National Science Foundation and the National Institutes of Health. Each of these organizations has different goals for participating in such studies and brings widely different cultures and expectations to the table. While policies, procedures, contracts, and agreements set expectations, they cannot by themselves ensure ethical behavior. This position paper describes some common approaches to encouraging ethical behavior and their limits as means of enforcing it.
60.
In the above-named work (see ibid., vol. 38, p. 51-7, April 1989), S. Lafiti and A. El-Amawy apply, in a straightforward manner, the method developed by A.D. Singh (1985) to calculate lower bounds on the yield of nonplanar interstitial-redundancy topologies for processor arrays with spare processors. In their introduction, they claim that the models suggested by I. Koren and D.K. Pradhan (1987) are highly theoretical, since the number of states in the Markov model can be very large and determining the transition rates can be intractable. They add that applying empirical rules, as suggested by Koren and Pradhan (1987), can lead to unrealistic results and may require a large number of computations. They also claim that the Koren–Pradhan model does not suggest an algorithm for replacing faulty elements, and they conclude that a simpler model, such as the one proposed by Singh, is needed for calculating the yield of fault-tolerant processor arrays. In the present comment, Koren and Pradhan respond to these claims and clarify the differences between their yield analysis and Singh's.