941.
We explore the compatibility of empirical trends in various thermodynamic properties of cuprate superconductors with the Bose-Einstein condensation scenario. These trends include the relations between transition temperature, hole concentration, and condensate density; the rise and the upper limit of the transition temperature; the dependence of pressure and isotope coefficients on transition temperature; as well as the observed critical behavior, which is reminiscent of three-dimensional systems with a scalar complex order parameter and short-range interactions. For this purpose we consider an interacting charged Bose gas. Due to the high polarizability of the cuprates, the Coulomb interaction is strongly screened. For this reason, the problem of calculating thermodynamic properties becomes essentially equivalent to that of the uncharged gas with short-range interactions. This problem, however, has not been solved either. Nevertheless, in the dilute limit the problem reduces to the ideal Bose gas treated by Schafroth, while in the dense regime condensation and superfluidity are suppressed because bosons of finite extension fill the available volume. This limiting behavior provides an interpolation scheme for the dependence of both the transition temperature and the zero-temperature superfluid density on the boson density. On this basis, relating the hole concentration in the cuprates to the boson density and the superfluid density to the square of the inverse London penetration depth, the compatibility of the empirical trends in the cuprates with Bose-gas behavior can be verified. Our analysis reveals remarkable agreement between these trends and the corresponding Bose-gas behavior. There is even strong evidence of the most striking implication of this scenario, the dependence of the transition temperature on the zero-temperature superfluid density, which resembles the outline of a fly's wing.
This evidence emerges from recent μSR data for Tl2Ba2CuO6+δ and kinetic inductance measurements for La2−xSrxCuO4 films, revealing that the penetration depths of underdoped and overdoped samples at Tc do not differ significantly. In view of this, we find considerable evidence regarding the nature of the superconducting transition in the cuprates, without invoking any specific pairing mechanism. The authors are grateful to J. G. Bednorz, D. Baeriswyl, H. Beck, J. I. Budnick, H. Keller, K. A. Müller, Ch. Niedermeier, and J. J. Rodriguez for valuable discussions.
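In the dilute limit the abstract invokes, the transition temperature is that of Schafroth's ideal Bose gas, T_c = (2πħ²/(m k_B)) (n/ζ(3/2))^(2/3). A minimal numerical sketch of this standard formula follows; the boson mass and density used in the usage example are illustrative placeholders, not values from the paper:

```python
import math

HBAR = 1.054571817e-34         # reduced Planck constant, J*s
KB = 1.380649e-23              # Boltzmann constant, J/K
ZETA_3_2 = 2.6123753486854883  # Riemann zeta(3/2)

def tc_ideal_bose(n, m):
    """Condensation temperature (K) of an ideal Bose gas with number
    density n (m^-3) and boson mass m (kg):
    Tc = (2*pi*hbar^2 / (m*kB)) * (n / zeta(3/2))**(2/3)."""
    return (2.0 * math.pi * HBAR**2 / (m * KB)) * (n / ZETA_3_2) ** (2.0 / 3.0)
```

The n^(2/3) scaling is the point of contact with the interpolation scheme described above: increasing the boson density eightfold raises the ideal-gas Tc by a factor of four.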
942.
Multidimensional database technology
Pedersen T.B., Jensen C.S. Computer, 2001, 34(12): 40-46
Multidimensional database technology is a key factor in the interactive analysis of large amounts of data for decision-making purposes. In contrast to previous technologies, these databases view data as multidimensional cubes that are particularly well suited for data analysis. Multidimensional models categorize data either as facts with associated numerical measures or as textual dimensions that characterize the facts. Queries aggregate measure values over a range of dimension values to provide results such as total sales per month of a given product. Multidimensional database technology is being applied to distributed data and to new types of data that current technology often cannot adequately analyze. For example, classic techniques such as preaggregation cannot ensure fast query response times when data, such as that obtained from sensors or GPS-equipped moving objects, changes continuously. Multidimensional database technology will increasingly be applied where analysis results are fed directly into other systems, thereby eliminating humans from the loop. When coupled with the need for continuous updates, this context poses stringent performance requirements not met by current technology.
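The "total sales per month of a given product" query mentioned above can be sketched as an aggregation over a fact table; the fact rows, dimension names, and measure below are hypothetical examples, not data from the article:

```python
from collections import defaultdict

# Hypothetical fact table: each fact carries dimension values
# (product, month) and one numerical measure (sales).
facts = [
    {"product": "widget", "month": "2001-01", "sales": 100.0},
    {"product": "widget", "month": "2001-01", "sales": 50.0},
    {"product": "widget", "month": "2001-02", "sales": 75.0},
    {"product": "gadget", "month": "2001-01", "sales": 20.0},
]

def total_sales_per_month(facts, product):
    """Aggregate the 'sales' measure over the 'month' dimension,
    restricted to one value of the 'product' dimension."""
    totals = defaultdict(float)
    for f in facts:
        if f["product"] == product:
            totals[f["month"]] += f["sales"]
    return dict(totals)
```

A multidimensional engine would answer the same query from a preaggregated cube rather than by scanning the facts, which is exactly the preaggregation technique the abstract notes breaks down under continuous updates.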
943.
Optical techniques are finding widespread use in analytical chemistry for chemical and biochemical analysis. During the past decade, there has been an increasing emphasis on miniaturization of chemical analysis systems, and naturally this has stimulated a large effort in integrating microfluidics and optics in lab-on-a-chip microsystems. This development is partly defining the emerging field of optofluidics. Scaling analysis and experiments have demonstrated the advantage of micro-scale devices over their macroscopic counterparts for a number of chemical applications. However, from an optical point of view, miniaturized devices suffer dramatically from the reduced optical path compared to macroscale experiments, e.g. in a cuvette. Obviously, the reduced optical path complicates the application of optical techniques in lab-on-a-chip systems. In this paper we theoretically discuss how a strongly dispersive photonic crystal environment may be used to enhance the light-matter interactions, thus potentially compensating for the reduced optical path in lab-on-a-chip systems. Combining electromagnetic perturbation theory with full-wave electromagnetic simulations, we address the prospects for achieving slow-light enhancement of Beer–Lambert–Bouguer absorption, photonic band-gap based refractometry, and high-Q cavity sensing. Invited paper for the “Optofluidics” special issue edited by Prof. David Erickson.
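The path-length problem and its slow-light remedy can be made concrete with the Beer–Lambert–Bouguer law, writing the enhancement as a single effective slow-down factor multiplying the absorbance. This is a schematic sketch with illustrative numbers, not the paper's perturbation-theory result:

```python
import math

def transmittance(alpha, length, slowdown=1.0):
    """Beer-Lambert-Bouguer transmittance over a path of 'length' (m)
    with absorption coefficient 'alpha' (1/m). 'slowdown' is an
    effective slow-light enhancement factor (schematically, a group
    index ratio); slowdown=1.0 recovers the ordinary law."""
    return math.exp(-slowdown * alpha * length)

alpha = 100.0  # 1/m, hypothetical analyte absorption coefficient

# Absorbance A = -ln(T): a 100 um on-chip channel vs a 1 cm cuvette.
a_chip = -math.log(transmittance(alpha, 100e-6))
a_cuvette = -math.log(transmittance(alpha, 1e-2))

# A slow-down factor equal to the path-length ratio restores the signal.
a_enhanced = -math.log(transmittance(alpha, 100e-6, slowdown=100.0))
```

The two orders of magnitude lost to the shorter path are recovered by a slow-down factor of the same size, which is the compensation mechanism the paper analyzes for dispersive photonic crystals.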
944.
A reaction path including transition states is generated for the Dowd mechanism [P. Dowd, R. Hershline, S.W. Ham, S. Naganathan. Vitamin K and energy transduction: a base strength amplification mechanism. Science 269 (1995) 1684-1691] of action for Vitamin K carboxylase (VKC) using quantum chemical methods (B3LYP/6-311G**). VKC, an essential enzyme in mammalian systems, catalyzes the conversion of the hydroquinone form of Vitamin K to the epoxide form in the presence of oxygen. An intermediate species of the oxidation of Vitamin K, an alkoxide, apparently acts to abstract the gamma hydrogen from specifically located glutamate residues. We are able to follow the proposed Dowd path to generate this alkoxide species. The geometries of the proposed model intermediates and transition states in the mechanism are energy optimized. We find that the most energetic step in the mechanism is the single deprotonation of the hydroquinone; once this occurs, there is only a small barrier of 3.5 kcal/mol for the interaction of oxygen with the carbon to be attacked, and the reaction then proceeds downhill in free energy to form the critical alkoxide species. The results are consistent with the idea that the enzyme probably acts to facilitate formation of the epoxide by reducing the energy required to deprotonate the hydroquinone form.
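The size of the 3.5 kcal/mol barrier can be gauged with the Eyring transition-state expression k = (k_B T / h) exp(-ΔG‡/RT). This back-of-the-envelope estimate (unit transmission coefficient, room temperature) is standard transition-state theory, not a calculation reported in the paper:

```python
import math

KB = 1.380649e-23     # Boltzmann constant, J/K
H = 6.62607015e-34    # Planck constant, J*s
R = 8.31446261815324  # gas constant, J/(mol*K)

def eyring_rate(dg_kcal_per_mol, temp=298.15):
    """Eyring rate constant (1/s) for a free-energy barrier given in
    kcal/mol, with the transmission coefficient taken as 1."""
    dg = dg_kcal_per_mol * 4184.0  # convert kcal/mol to J/mol
    return (KB * temp / H) * math.exp(-dg / (R * temp))
```

At 298 K a 3.5 kcal/mol barrier corresponds to a rate on the order of 10^10 s^-1, i.e. effectively no kinetic obstacle, consistent with the abstract's conclusion that the deprotonation step, not the oxygen attack, is rate-limiting.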
945.
Müller H., Kirkhus B., Pedersen J.I. Lipids, 2001, 36(8): 783-791
The effects of dietary trans fatty acids on serum total and low density lipoprotein (LDL) cholesterol have been evaluated by incorporating trans fatty acids into predictive equations and comparing their effects with the effects of the individual saturated fatty acids 12∶0, 14∶0, and 16∶0. Trans fatty acids from partially hydrogenated soybean oil (TRANS V) and fish oil (TRANS F) were included in previously published equations by constrained regression analysis, allowing slight adjustments of existing coefficients. Prior knowledge about the signs and ordering of the regression coefficients was explicitly incorporated into the regression modeling by adding lower and upper bounds to the coefficients. The amounts of oleic acid (18∶1) and polyunsaturated fatty acids (18∶2, 18∶3) were not sufficiently varied in the studies, and the respective regression coefficients were therefore set equal to those found by Yu et al. [Yu, S., Derr, J., Etherton, T.D., and Kris-Etherton, P.M. (1995) Plasma Cholesterol-Predictive Equations Demonstrate That Stearic Acid Is Neutral and Monounsaturated Fatty Acids Are Hypocholesterolemic, Am. J. Clin. Nutr. 61, 1129–1139]. Stearic acid (18∶0), considered to be neutral, was not included in the equations. The regression analyses were based on results from four controlled dietary studies with a total of 95 participants and including 10 diets differing in fatty acid composition and with 30–38% of energy (E%) as fat. The analyses resulted in the following equations, where the change in cholesterol is expressed in mmol/L and the change in intake of fatty acids is expressed in E%: Δ Total cholesterol=0.01 Δ(12∶0)+0.12 Δ(14∶0)+0.057 Δ(16∶0)+0.039 Δ(TRANS F)+0.031 Δ(TRANS V)−0.0044 Δ(18∶1)−0.017 Δ(18∶2, 18∶3) and ΔLDL cholesterol =0.01 Δ(12∶0)+0.071 Δ(14∶0)+0.047 Δ(16∶0)+0.043 Δ(TRANS F)+0.025 Δ(TRANS V)−0.0044 Δ(18∶1)−0.017 Δ(18∶2, 18∶3). 
The regression analyses confirm previous findings that 14∶0 is the most hypercholesterolemic fatty acid and indicate that trans fatty acids are less hypercholesterolemic than the saturated fatty acids 14∶0 and 16∶0. TRANS F may be slightly more hypercholesterolemic than TRANS V, or partially hydrogenated fish oil may contain hypercholesterolemic fatty acids other than those included in the equations. The test set used for validation consisted of 22 data points from seven recently published dietary studies. The equation for total cholesterol showed good predictive ability, with a correlation coefficient of 0.981 between observed and predicted values. The equation has been used by the Norwegian food industry in reformulating margarines into more healthful products with a reduced content of cholesterol-raising fatty acids. These authors have contributed equally to this work.
946.
With design-independent loads and only a constrained volume (no local bounds), the same optimal design leads simultaneously to minimum compliance and maximum strength. For thermoelastic structures, however, this is not the case, and the maximum volume may not be an active constraint for minimum compliance. This is proved by a sensitivity analysis of compliance that permits localized determination of sensitivities; for thermoelastic structures the compliance is not identical to the total elastic energy (twice the strain energy). An explicit formula for the difference is derived and numerically illustrated with examples. In compliance minimization for thermoelastic structures it may be advantageous to decrease the total volume, but for strength maximization it is argued to keep the total permissible volume. Linear interpolation (no penalization) may be justified to a certain extent for 2D thickness-optimized designs, but for 3D design problems interpolation must be included, and not only from the penalization point of view to obtain 0-1 designs. Three interpolation types are presented in a uniform manner, including the well-known one-parameter penalizations named SIMP and RAMP. An alternative two-parameter interpolation in explicit form is preferred, and the influence of interpolation on compliance sensitivity analysis is included. For direct strength maximization, the sensitivity analysis of local von Mises stresses is demanding. An applied recursive procedure to obtain uniform energy density is presented in detail, and it is shown by examples that the obtained designs come close to fulfilling strength maximization as well. Explicit formulas for equivalent thermoelastic loads in 2D and 3D finite element analysis are derived and applied, including the sensitivity analysis.
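The two one-parameter penalizations named above have well-known closed forms: SIMP scales stiffness as rho^p, RAMP as rho/(1 + q(1 - rho)). A minimal sketch (the default exponents are common textbook choices, not values from the paper; the two-parameter interpolation the paper prefers is not reproduced here):

```python
def simp(rho, p=3.0):
    """SIMP stiffness interpolation: E(rho)/E0 = rho**p.
    rho in [0, 1] is the design (density) variable."""
    return rho ** p

def ramp(rho, q=8.0):
    """RAMP stiffness interpolation: E(rho)/E0 = rho / (1 + q*(1 - rho)).
    Unlike SIMP, its slope at rho = 0 is finite (1/(1+q))."""
    return rho / (1.0 + q * (1.0 - rho))
```

Both interpolations agree at the endpoints (0 at void, 1 at solid) while making intermediate densities structurally uneconomical, which drives the optimizer toward the 0-1 designs discussed above.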
947.
For Part I, see ibid., vol. 40, no. 3, pp. 238-249 (1993). Results of angular spectrum computations are presented for three applications: general decomposition, pressure field calculation, and modeling of pulse-echo measurements. Wherever possible, analytical (exact) results are used as a reference for the decomposition results. This permits the accuracy of the angular spectrum decomposition to be evaluated for specific choices of M and N, where M is the number of samples across the maximum characteristic length d of the source region and N is the total number of samples along each side of the source decomposition plane, and for both sampling techniques. Based on the results for the three applications, guidelines for choosing M and N are presented in a graphical format.
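The angular spectrum method the paper evaluates decomposes a source field into plane waves, advances each by its own phase, and recombines them. A one-dimensional sketch follows (a naive O(N^2) DFT is used for clarity; the paper's 2D geometry and its M, N sampling guidelines are not modeled):

```python
import cmath
import math

def propagate_angular_spectrum(field, dx, wavelength, z):
    """Propagate a sampled 1-D complex field a distance z:
    DFT to the angular spectrum A(kx), multiply each plane-wave
    component by exp(i*kz*z) with kz = sqrt(k^2 - kx^2), inverse DFT.
    An imaginary kz yields evanescent decay automatically."""
    n = len(field)
    k = 2.0 * math.pi / wavelength
    # Forward DFT -> angular spectrum (normalized by n)
    spectrum = [sum(field[j] * cmath.exp(-2j * math.pi * m * j / n)
                    for j in range(n)) / n for m in range(n)]
    out = []
    for j in range(n):
        acc = 0.0 + 0.0j
        for m in range(n):
            fm = m if m <= n // 2 else m - n  # wrap bin to +/- Nyquist
            kx = 2.0 * math.pi * fm / (n * dx)
            kz = cmath.sqrt(k * k - kx * kx)
            acc += spectrum[m] * cmath.exp(1j * kz * z) * \
                   cmath.exp(2j * math.pi * m * j / n)
        out.append(acc)
    return out
```

Propagating by z = 0 must return the input field, and a purely propagating component keeps unit magnitude; both serve as the kind of analytical reference checks the abstract describes.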
948.
Dual connectivity (DC) allows user equipments (UEs) to receive data simultaneously from different eNodeBs (eNBs) in order to boost performance in a heterogeneous network with dedicated carrier deployment. Yet how to operate efficiently with DC opens a number of research questions. In this paper we focus on the case where a macro eNB and a small cell eNB are interconnected with traditional backhaul links characterized by a certain latency, assuming independent radio resource management (RRM) functionalities residing in each eNB. In order to fully harvest the gain provided by DC, an efficient flow control of data between the involved macro and small cell eNBs is proposed. Moreover, guidelines for the main performance-determining RRM algorithms, such as UE cell association and packet scheduling, are also presented. It is demonstrated how proper configuration of the proposed flow control algorithm offers efficient trade-offs between reducing the probability that one of the eNBs involved in the DC runs out of data and limiting the buffering time. Simulation results show that the performance of DC over traditional backhaul connections is close to that achievable with inter-site carrier aggregation (CA) and virtually zero-latency fronthaul connections, and in any case significantly higher than without DC.
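The buffering-versus-starvation trade-off can be illustrated with a toy watermark-based flow control over a latent backhaul. This is a schematic sketch, not the paper's algorithm; all names and parameters (watermark, request size, delay) are hypothetical:

```python
from collections import deque

def simulate_flow_control(steps, backhaul_delay, serve_rate,
                          low_mark, request_size):
    """Toy DC flow control: the small cell drains 'serve_rate' bytes
    per step from its buffer and requests 'request_size' bytes from
    the macro whenever buffered + in-flight data falls to 'low_mark';
    requested data arrives 'backhaul_delay' steps later.
    Returns per-step buffer levels and the number of starved steps."""
    buffer_level = 0
    in_flight = deque()  # (arrival_step, amount) pairs
    levels, starved = [], 0
    for t in range(steps):
        while in_flight and in_flight[0][0] <= t:   # backhaul deliveries
            buffer_level += in_flight.popleft()[1]
        if buffer_level == 0:                       # nothing to schedule
            starved += 1
        buffer_level -= min(serve_rate, buffer_level)
        pending = sum(amount for _, amount in in_flight)
        if buffer_level + pending <= low_mark:      # watermark trigger
            in_flight.append((t + backhaul_delay, request_size))
        levels.append(buffer_level)
    return levels, starved
```

With the watermark set above serve_rate * backhaul_delay, the buffer never empties after the initial fill, at the cost of keeping more data buffered at the small cell; lowering the watermark trades that buffering time back for a higher starvation risk, which is the trade-off the abstract describes.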
949.
950.
We examined the association between complexity of the main lifetime occupation and changes in cognitive ability in later life. Data on complexity of work with data, people, and things and on 4 cognitive factors (verbal, spatial, memory, and speed) were available from 462 individuals in the longitudinal Swedish Adoption/Twin Study of Aging. Mean age at the first measurement wave was 64.3 years (SD = 7.2), and 65% of the sample had participated in at least three waves of data collection. Occupational complexity with people and with data were both correlated with cognitive performance. Individuals with more complex work demonstrated higher mean performance on the verbal, spatial, and speed factors. Latent growth curve analyses indicated that, after correcting for education, only complexity with people was associated with differences in cognitive performance and rate of cognitive change. Continued engagement as a result of occupational complexity with people helped to facilitate verbal function before retirement, whereas a previously high level of complexity of work with people was associated with faster decline on the spatial factor after retirement.