2,283 results found (search time: 203 ms)
131.
Determination of the parameter pe (t_ion = 0.5) characterizing the electronic partial conductivity in solid electrolytes. Application of a modified polarization method with EMF measurement. Values of the parameter pe are given for solid electrolytes of ZrO2 with a mole fraction xCaO = 0.14 and of ThO2 with a given mole fraction, at temperatures between 1200 and 1650 °C. Corrected calibration curves are calculated for the electrochemical measurement of oxygen activities in steel melts. The preferential use of ThO2(Y2O3) electrolytes is recommended for measuring very low oxygen activities.
132.
No abstract available.
133.
Copper in drinking water has been associated with Non-Indian Childhood Cirrhosis (NICC), a form of early childhood liver cirrhosis. This epidemiological study examines the exposure of infants to increased copper concentrations through drinking water from public water supplies in Berlin, Germany, and whether this dietary copper intake can cause liver damage in early childhood. In total, water samples from 2944 households with infants were tested for copper. Mean copper concentrations in the two types of collected composite samples were 0.44 and 0.56 mg/l, respectively. Families with a copper concentration at or above 0.8 mg/l in one or both composite samples (29.9% of all sampled households), and whose infant ingested a defined minimum amount of tap water, were advised to undergo a paediatric examination. Nearly all of the 541 infants so advised were examined by a local paediatrician, and 183 of these also received a blood serum analysis. None of the infants showed clear signs of liver disease, although a few serum parameters lay outside the accompanying reference range and abdominal ultrasound imaging gave slightly unusual results in five cases. Additionally, no signs of a negative health effect were found in the statistical analysis of the serum parameters GOT, GPT, GGT, total bilirubin, serum copper, and ceruloplasmin in relation to the infants' estimated daily and total copper intakes from tap water. No dose relation between serum parameters and estimated copper intakes could be established. The results thus give no confirmed indication of liver malfunction in infants whose food had been prepared using tap water with an elevated copper concentration, and therefore no indication of a hazard due to copper pipes connected to public water supplies.
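The dose-relation analysis described above can be illustrated with a minimal sketch: an ordinary least-squares fit of a serum parameter against estimated copper intake, whose slope indicates whether a dose relation exists. All data values and names below are illustrative assumptions, not figures taken from the study.

```python
# Minimal sketch of a dose-relation check, assuming illustrative data:
# fit serum parameter ~ copper intake and inspect the slope.

def slope_and_intercept(xs, ys):
    """Ordinary least-squares fit y = a*x + b; returns (a, b)."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    a = sxy / sxx
    return a, my - a * mx

# Hypothetical daily copper intakes (mg/day) and serum GPT values (U/l)
intake = [0.1, 0.2, 0.3, 0.4, 0.5, 0.6]
gpt = [22, 25, 21, 24, 23, 22]

a, b = slope_and_intercept(intake, gpt)
# A slope close to zero relative to the GPT scale suggests no dose relation.
print(round(a, 2))
```

In the study, a regression of this kind (over each of the listed serum parameters) showed no dose relation to estimated copper intake.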
134.
135.
The verification of security protocols has attracted a lot of interest in the formal methods community, yielding two main verification approaches: i) state exploration, e.g. FDR [Gavin Lowe. Breaking and fixing the Needham-Schroeder public-key protocol using FDR. In TACAS'96: Proceedings of the Second International Workshop on Tools and Algorithms for Construction and Analysis of Systems, pages 147–166, London, UK, 1996. Springer-Verlag] and OFMC [A.D. Basin, S. Mödersheim, and L. Viganò. An on-the-fly model-checker for security protocol analysis. In D. Gollmann and E. Snekkenes, editors, ESORICS'03: 8th European Symposium on Research in Computer Security, number 2808 in Lecture Notes in Computer Science, pages 253–270, Gjøvik, Norway, 2003. Springer-Verlag]; and ii) theorem proving, e.g. the Isabelle inductive method [Lawrence C. Paulson. The inductive approach to verifying cryptographic protocols. Journal of Computer Security, 6(1-2):85–128, 1998] and Coral [G. Steel, A. Bundy, and M. Maidl. Attacking the Asokan-Ginzboorg protocol for key distribution in an ad-hoc bluetooth network using Coral. In H. König, M. Heiner, and A. Wolisz, editors, IFIP TC6/WG 6.1: Proceedings of 23rd IFIP International Conference on Formal Techniques for Networked and Distributed Systems, volume 2767, pages 1–10, Berlin, Germany, 2003. FORTE 2003 (work in progress papers)]. Complementing formal methods, Abadi and Needham's principles aim to guide the design of security protocols in order to make them simple and, hopefully, correct [M. Abadi and R. Needham. Prudent engineering practice for cryptographic protocols. IEEE Transactions on Software Engineering, 22(1):6–15, 1996]. We are interested in a problem related to verification but far less explored: the correction of faulty security protocols. Experience has shown that the analysis of counterexamples or failed proof attempts often holds the key to completing proofs and to correcting a faulty model.
In this paper, we introduce a method for patching faulty security protocols that are susceptible to an interleaving-replay attack. Our method uses Abadi and Needham's principles of prudent engineering practice for cryptographic protocols to guide both the location of the fault in a protocol and the proposition of candidate patches. We have tested our method with encouraging results. The test set includes 21 faulty security protocols borrowed from the Clark-Jacob library [J. Clark and J. Jacob. A survey of authentication protocol literature: Version 1.0. Technical report, Department of Computer Science, University of York, November 1997. A complete specification of the Clark-Jacob library in CAPSL is available at http://www.cs.sri.com/millen/capsl/].
136.
Market segmentation and user individualization are forcing manufacturing companies to provide greater variation and derivative diversity within ever shorter innovation cycles and product lifetimes -- a demanding development and production task.
137.
Already in the design process of automatic transmissions, robustness should be considered an important design goal to assure high shift quality and reliability in the presence of environmental, operating, or manufacturing uncertainties. Besides the implementation of adaption algorithms for clutch control, this may also be achieved by using robust design strategies. In such a procedure, mean values (for best average shift quality) and variances (for highest robustness against changes in system parameters) are minimized simultaneously. In this paper, such an approach is developed for transmission calibration. Comparison with a deterministically found design shows improved performance and validates the eligibility of the proposed design strategy.
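The robust design idea outlined above, minimizing a weighted sum of the mean and the variance of a quality measure over uncertain parameters, can be illustrated with a minimal Monte Carlo sketch. The surrogate cost function, the parameter distribution, and all names below are invented for illustration and are not the paper's transmission model.

```python
import random

def shift_quality(calibration, parameter):
    """Illustrative surrogate cost: lower means better shift quality."""
    return (calibration - parameter) ** 2 + 0.1 * calibration

def robust_cost(calibration, samples, w_mean=1.0, w_var=1.0):
    """Weighted sum of mean and variance of the quality over parameter samples."""
    vals = [shift_quality(calibration, p) for p in samples]
    n = len(vals)
    mean = sum(vals) / n
    var = sum((v - mean) ** 2 for v in vals) / n
    return w_mean * mean + w_var * var

random.seed(0)
# Uncertain system parameter, e.g. a clutch friction coefficient
samples = [random.gauss(1.0, 0.05) for _ in range(200)]

# Coarse one-dimensional search over the calibration variable
best = min((robust_cost(c / 100, samples), c / 100) for c in range(0, 201))
print(round(best[1], 2))  # robust optimum lies near the nominal parameter
```

A deterministic design would minimize `shift_quality` at the nominal parameter only; the robust formulation trades a slightly worse mean for much lower sensitivity to parameter scatter.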
138.
Given an n-node edge-weighted graph and a subset of k terminal nodes, the NP-hard (weighted) Steiner tree problem is to compute a minimum-weight tree which spans the terminals. All known algorithms for this problem which improve on trivial O(1.62^n)-time enumeration are based on dynamic programming and require exponential space. Motivated by the fact that exponential-space algorithms are typically impractical, in this paper we address the problem of designing faster polynomial-space algorithms. Our first contribution is a simple O((27/4)^k n^{O(log k)})-time polynomial-space algorithm for the problem. This algorithm is based on a variant of the classical tree-separator theorem: every Steiner tree has a node whose removal partitions the tree into two forests, each containing at most 2k/3 terminals. Exploiting separators of logarithmic size which evenly partition the terminals, we are able to reduce the running time to $O(4^{k}n^{O(\log^{2} k)})$. This improves on trivial enumeration for roughly k < n/3, which covers most cases of practical interest. Combining the latter algorithm (for small k) with trivial enumeration (for large k), we obtain an O(1.59^n)-time polynomial-space algorithm for the weighted Steiner tree problem. As a second contribution of this paper, we present an O(1.55^n)-time polynomial-space algorithm for the cardinality version of the problem, where all edge weights are one. This result is based on an improved branching strategy. The refined branching rests on a charging mechanism which shows that, for large values of k, convenient local configurations of terminals and non-terminals exist. The analysis of the algorithm relies on the Measure & Conquer approach: the non-standard measure used here is a linear combination of the number of nodes and the number of non-terminals. Using a recent result of Nederlof (International Colloquium on Automata, Languages and Programming (ICALP), pp. 713–725, 2009), the running time can be reduced to O(1.36^n). The previous best algorithm for the cardinality case runs in O(1.42^n) time and exponential space.
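The "trivial enumeration" baseline that the paper improves on can be sketched as follows: enumerate every subset of non-terminal (Steiner) nodes and take the minimum spanning tree of the subgraph induced on the terminals plus that subset. This is a correct but exponential-time baseline, not the paper's polynomial-space algorithm; the example graph is invented.

```python
from itertools import combinations

def mst_weight(nodes, edges):
    """Kruskal MST over the subgraph induced on `nodes`; None if disconnected."""
    parent = {v: v for v in nodes}
    def find(v):
        while parent[v] != v:
            parent[v] = parent[parent[v]]  # path halving
            v = parent[v]
        return v
    total, used = 0, 0
    for w, u, v in sorted((w, u, v) for u, v, w in edges
                          if u in nodes and v in nodes):
        ru, rv = find(u), find(v)
        if ru != rv:
            parent[ru] = rv
            total += w
            used += 1
    return total if used == len(nodes) - 1 else None

def steiner_tree_weight(n_nodes, edges, terminals):
    """Trivial enumeration over all sets of Steiner nodes: O(2^(n-k)) MSTs."""
    non_terminals = [v for v in range(n_nodes) if v not in terminals]
    best = None
    for r in range(len(non_terminals) + 1):
        for extra in combinations(non_terminals, r):
            w = mst_weight(set(terminals) | set(extra), edges)
            if w is not None and (best is None or w < best):
                best = w
    return best

# Star example: terminals 0, 1, 2 connect cheaply through hub node 3
edges = [(0, 3, 1), (1, 3, 1), (2, 3, 1), (0, 1, 3)]
print(steiner_tree_weight(4, edges, {0, 1, 2}))  # 3
```

The enumeration is correct because the node set of any optimal Steiner tree is terminals plus some Steiner-node subset, and the induced-subgraph MST on that set weighs no more than the tree itself.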
139.
Pancreatic beta-cells have a crucial role in the regulation of blood glucose homeostasis through the production and secretion of insulin. In type 1 diabetes (T1D), an autoimmune reaction against the beta-cells, together with the presence of inflammatory cytokines and reactive oxygen species (ROS) in the islets, leads to beta-cell dysfunction and death. This review gives an overview of proteomic studies that lead to a better understanding of beta-cell functioning in T1D. Protein profiling of isolated islets and beta-cell lines in health and T1D contributed to the unraveling of pathways involved in cytokine-induced cell death. In addition, by studying the serological proteome of T1D patients, new biomarkers and beta-cell autoantigens were discovered, which may improve screening tests and the follow-up of T1D development. Interestingly, an important role for post-translational modifications (PTMs) was demonstrated in the generation of beta-cell autoantigens. To conclude, proteomic techniques are of indispensable value for improving our knowledge of beta-cell function in T1D and for the search for therapeutic targets.
140.
We present graphics processing unit (GPU) data structures and algorithms to efficiently solve sparse linear systems of the kind typically required in simulations of multi-body systems and deformable bodies. We introduce an efficient sparse matrix data structure that can handle arbitrary sparsity patterns and outperforms current state-of-the-art implementations for sparse matrix-vector multiplication. Moreover, an efficient method to construct global matrices on the GPU is presented, in which hundreds of thousands of individual element contributions are assembled in a few milliseconds. A finite-element-based method for the simulation of deformable solids as well as an impulse-based method for rigid bodies are introduced to demonstrate the advantages of the novel data structures and algorithms. These applications share the characteristic that a major part of the computational effort consists of building and solving systems of linear equations in every time step. Our solving method achieves a speed-up factor of up to 13 compared with other GPU methods.
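The core kernel of such solvers is the sparse matrix-vector product. A minimal CPU-side sketch using the common CSR (compressed sparse row) layout is shown below; this is a generic illustration of the operation, not the paper's GPU data structure, and all names and data are illustrative.

```python
# Sketch of y = A @ x for A in CSR form: `values` holds the nonzeros row by
# row, `col_idx` their column indices, and `row_ptr[i]:row_ptr[i+1]` delimits
# the nonzeros of row i.

def csr_matvec(values, col_idx, row_ptr, x):
    y = []
    for row in range(len(row_ptr) - 1):
        acc = 0.0
        for k in range(row_ptr[row], row_ptr[row + 1]):
            acc += values[k] * x[col_idx[k]]
        y.append(acc)
    return y

# A = [[2, 0, 1],
#      [0, 3, 0]]
values = [2.0, 1.0, 3.0]
col_idx = [0, 2, 1]
row_ptr = [0, 2, 3]
print(csr_matvec(values, col_idx, row_ptr, [1.0, 1.0, 1.0]))  # [3.0, 3.0]
```

On a GPU, each row's inner loop is typically mapped to a thread or warp, which is why the storage layout and load coalescing dominate SpMV performance.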
Copyright © 北京勤云科技发展有限公司 (京ICP备09084417号)